
Miscellaneous Programming Notebook



AntTweakBar wrapper for C#

Posted 20 April 2014 · 929 views

Hello,

I've just uploaded my C# wrapper for the AntTweakBar GUI library at https://github.com/TomCrypto/AntTweakBar.NET. Some of you might remember me posting about this on the forum already; this is a completely new and updated version, much, much better than the original one. Feel free to check it out, and enjoy!


A tool I wrote

Posted 20 December 2013 · 1,939 views

Hi,
I made a little Python 3 tool over the last couple of days. It's basically a console-based todo-list/bug tracker that is easy to use, targeted at solo programmers or small teams working on smallish projects or prototypes who want something alongside their usual source control to keep track of todos and bugs that need to be (or have already been) fixed, in a more convenient way than a disposable text file. It is obviously not intended to be a complete replacement for real bug tracking infrastructure, but it is considerably easier and faster to set up, with far less overhead, while still being a lot better than nothing (in my opinion). The idea is that it handles both roles (todo list and bug tracker) and integrates reasonably well with a development workflow, imho.

I kinda wrote it to take a break and just complete something. It's fairly bare-bones, but I'm posting it here in case anyone finds it useful. The script works under Linux, BSD, and Windows, and probably Mac too, though I haven't tested the latter. You'll need a few pip packages for Windows, so grab the setuptools and pip installers (v3.3) from here (after installing Python 3), then use the pip.exe you get to install "termcolor" and "colorama", and the script will work. You might also have to enable Unicode for cmd.exe, given that it doesn't really support it out of the box: just type "chcp 1250" in it beforehand; that worked for me.

Here is the github repository. Feel free to send me pull requests if you have any suggestions for improvements or additional features, or if you encounter any bugs; it's honestly a tiny script, so I'm open to anything that could make it a better tool. It wasn't a huge time investment to write, but again, if it's worth doing, it's worth doing well.


More updates!

Posted 03 August 2013 · 951 views

Hey everyone,
I've been working pretty hard in my free time to expand the range of platforms and architectures supported by my crypto library. I have quite a lot of work at uni (assignments and everything), so I can only work on it in short bursts, but work is getting done (slowly). Currently the library works under:

- any Windows operating system, using MinGW (that's both 32-bit and 64-bit)
- any Linux and BSD flavour, under the following processor architectures:
* x86 (32-bit intel/amd)
* x86_64 (64-bit intel/amd)
* 32-bit PowerPC
* ARMv5 (or at least one model of it)

This was achieved through a lot of VM testing, mostly using QEMU. The goal is to get a variety of architectures working in order to finalize the design of the platform-selection preprocessor code (to make sure it is flexible enough to handle even very unusual hardware) before moving into the "inflationary stage", where more features (algorithms and everything) will be added to the framework very quickly.
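
To give an idea of what that platform-selection layer involves, here is a minimal sketch of the kind of preprocessor logic I mean; the ORDO_* macro names are made up for illustration and are not the library's actual ones:

#if defined(_WIN32)
    #define ORDO_OS_WINDOWS
#elif defined(__linux__)
    #define ORDO_OS_LINUX
#elif defined(__FreeBSD__) || defined(__NetBSD__) || defined(__OpenBSD__)
    #define ORDO_OS_BSD
#else
    #error "Unsupported operating system."
#endif

#if defined(__x86_64__) || defined(_M_X64)
    #define ORDO_ARCH_X86_64
#elif defined(__i386__) || defined(_M_IX86)
    #define ORDO_ARCH_X86
#elif defined(__powerpc__)
    #define ORDO_ARCH_PPC32
#elif defined(__arm__)
    #define ORDO_ARCH_ARM
#else
    #define ORDO_ARCH_GENERIC /* fall back to the portable C code paths */
#endif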

I am considering setting up a permanent "testing box" hosting a large number of VMs and giving me SSH access to every platform I need, so that I can write a script to test them all at once (and, if needed, debug remotely). That would certainly be a big help, but I haven't thought it through in detail just yet.

The PowerPC implementation in particular gave me pause. Until that point I had only been using little-endian architectures, so while I had done my best to make my code endian-neutral, I expected to have to fix some things. Overall it wasn't too bad at all, with only a couple of mistakes; however, one algorithm in particular (the Skein-256 hash function) had me reading its specification over again to work out precisely which endianness convention it was using. This took me a few hours, since my original implementation wasn't exactly elegant, but in the end I finally got everything working!
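
As an aside, the usual trick for endian-neutral code is to read and write multi-byte words one byte at a time with an explicit byte order. A minimal sketch (not the library's actual code) of loading a little-endian 32-bit word, which gives the same result on x86 and on big-endian PowerPC:

#include <stdint.h>

/* Illustrative helper: assemble a 32-bit little-endian word byte by byte,
 * so the result does not depend on the host's byte order. */
static uint32_t load_le32(const unsigned char *p)
{
    return (uint32_t)p[0]
         | ((uint32_t)p[1] << 8)
         | ((uint32_t)p[2] << 16)
         | ((uint32_t)p[3] << 24);
}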

As for Mac support, well, it's been dropped for the moment. I can't get an OS X VM working (it seems they've gone out of their way to make their hardware hard to virtualize), I am not willing to purchase Apple hardware just to work on it for a few minutes, and of course there are no OS X computers at uni that I could use, so it'll have to wait (if anyone using OS X wants to give building the library a try, I'd be very grateful). I don't suspect it would be very hard, though, given that OS X is essentially a Unix-based operating system; the only thing missing should be the preprocessor code necessary to detect the operating system. In other words, I expect it to work out of the box. But of course I can't test that assumption.

In any case, I am pretty happy right now: the library is stable, and I think my goal of making it flexible enough to easily support adding and removing entire modules has been satisfactorily met. To conclude, here's a comparison between an overview of the library about a month ago (top) and the library as of today (bottom). I've broken down some of the headers to make them more manageable, and now you can really start to see its modular nature, as well as the dependencies between the different parts of the library. Software design is kind of fun!

[Image: library header overview, about one month ago]

[Image: library header overview, as of today]


I might make a little video eventually showing the evolution of the library's architecture over time; it should be quite interesting to watch.

So, yeah, that's what I've been working on, on and off, for the past year or so, and I'm hoping I'll be able to release a finished product by the end of 2013. The library is more or less usable at the moment, but there aren't too many features available so far, because work needed to be done first on the framework (the "glue" holding every part of the library together). But as I mentioned previously, this refactoring that's been going on for over two months is almost finished, so the real fun can begin soon.

And on that note, good night!


Progress on my crypto library

Posted 07 July 2013 · 862 views

Hello GameDev!

I've been working quite hard lately to overhaul my library's test driver. I hadn't updated that part of the library since around June 2012, so as predicted it was in an advanced state of decay, completely out of sync with the new coding style guidelines and other things. Anyway, I have revived it. It proved to be a long and tedious process, mostly because there is so much stuff to test and the code is rather boilerplate.

I also wrapped the unit testing back-end into a nice graphical interface. This was partly because I needed eye-candy to look at while working, but I can rationalize it by claiming it makes scanning the output easier (it's hard to differentiate "PASS" and "FAIL" at a glance in white, fixed-font letters). So now passing tests show up in soothing green, and failing tests are displayed in threatening red. Wow, who would've thought of that?

This is what I have so far:

[Image: screenshot of the graphical test driver]


It's not much, but it's a start. The old program had 63 test vectors (grouped under 6 main categories) which covered most of the code pretty well but, incidentally, did not exercise many of the library's other features (which are going to become important once I start cross-compiling this onto every platform in sight, since I won't have the luxury of testing directly on the targeted hardware). In fact, I uncovered a couple of bugs while implementing additional tests: one minor, and one rather major that hadn't come up before due to... well... crappy unit tests. That'll teach me.

I actually wrote a Python script to convert my old "text-based test vector format" into the "new" format (a static array of structures embedded directly in the code). This makes things a lot simpler, especially since I no longer depend on an external file, so there's no need to parse it and handle errors. And it's not like rebuilding is difficult either; if you're developing tests, you're supposed to be able to compile.
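
For the curious, the embedded format might look something like this; it's a hypothetical sketch, and the field names and sample entry are made up rather than taken from the actual code:

#include <stddef.h>

/* Hypothetical embedded test-vector table; field names are illustrative. */
struct test_vector {
    const char *name;                /* human-readable description      */
    const unsigned char *input;      /* input message                   */
    size_t input_len;
    const unsigned char *expected;   /* expected output (e.g. a digest) */
    size_t expected_len;
};

static const struct test_vector vectors[] = {
    { "example vector", (const unsigned char *)"abc", 3,
      (const unsigned char *)"\x01\x02\x03", 3 }
    /* ... more vectors ... */
};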

The C unit testing "framework" was also pretty simple: really just an array of function pointers, each testing one part of the library and reporting success via an integer return value (each test was also given a char buffer to report any additional information, and a FILE* to dump really detailed debugging information into).
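
Something along these lines, in other words (a minimal sketch of that kind of setup, with made-up names and signatures rather than the actual ones):

#include <stdio.h>

/* Each test gets an info buffer and a log stream, and returns nonzero on success. */
typedef int (*test_fn)(char *info, size_t info_len, FILE *log);

static int test_example(char *info, size_t info_len, FILE *log)
{
    fprintf(log, "detailed debugging output goes here\n");
    snprintf(info, info_len, "example test passed");
    return 1;
}

static const test_fn tests[] = { test_example /* , ... */ };

int main(void)
{
    char info[256];
    size_t i;

    for (i = 0; i < sizeof(tests) / sizeof(tests[0]); ++i) {
        int ok = tests[i](info, sizeof(info), stderr);
        printf("%s: %s\n", ok ? "PASS" : "FAIL", info);
    }
    return 0;
}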

There is just so much stuff to do, though; the sheer amount of code to test, audit, maintain, and upgrade often seems overwhelming. But I'll get there eventually.

--

In other news, Ordo is now working under Windows (both 32-bit and 64-bit), all Linux flavours under x86 and x86_64, and all BSD variants under the same architectures. Supported compilers are gcc, mingw, and clang (I think I'm only going to support those; supporting multiple compilers simultaneously is a pain, especially since MSVC seems to have alternative definitions for literally everything).

I have also been reluctantly forced to at least partially drop C89 support. It turns out stdint.h is actually a C99 header, which means it's basically impossible to write portable, strictly standard C89 code without a pile of #ifdefs to essentially reimplement the missing headers oneself, which I am not ready to do at this time. Perhaps later I'll switch to an "arch folder" design, but for now this is how things are... ah, you gotta love C, with its unhelpful standards and endless pedantry.
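
To show what I mean, such a fallback would have to look something like the sketch below, where the typedef choices are only guesses about the target platform (this is an illustration, not code from the library):

/* Illustrative C89 fallback for <stdint.h>; the typedefs below only hold on
 * platforms where char/short/int happen to have these widths. */
#if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
    #include <stdint.h>
#else
    typedef unsigned char  uint8_t;
    typedef unsigned short uint16_t;
    typedef unsigned int   uint32_t;
    /* no portable 64-bit type in C89: "long long" is itself a C99 feature */
#endif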

--

Anyway, my work here is done for tonight: the old tests have been 100% ported to the new testing software, and now I only need to clean everything up.


So I was bored and this happened..

Posted 29 June 2013 · 788 views

Hello,
I couldn't get to sleep tonight (it's 6 in the morning), so I did what all software developers do when idle: I created an assembly language. No, really, I've wanted to try my hand at creating some sort of toy interpreted assembly prototype for a while now.

So, without further ado, behold the next trendy programming language, aptly codenamed "asm"!

here be dragons

WORDSIZE 32
REGISTER 3

; Calculates the GCD of two integers

@start
	; Pop arguments off the stack (we know how many we expect)
	; Now R0 and R1 contain the two arguments (in correct order)
	POP R1
	POP R0
	JMP loop

@loop
	; If R1 is zero, we're done and gcd is R0
	CMP R1 0
	EQL end

	; Else do standard operation
	MOV R2 R1
	MOV R1 R0
	MOD R1 R2
	MOV R0 R2
	JMP loop

@end
	; Clear the stack
	CLR

	PUSH 1  ; Size of output array (a single integer)
	PUSH R0 ; The GCD of the two inputs
	RET     ; We're done!

WORDSIZE 32
REGISTER 4

; Outputs the first N terms of the Fibonacci sequence

@start
	; Pop arguments off the stack (we read the number of elements desired)
	; Now R0 contains the number of Fibonacci elements desired
	POP R0

	; Initialize R1 and R2 to the start of the Fibonacci sequence
	MOV R1 0
	MOV R2 1

	; Push the counter as the stack output size
	CLR
	PUSH R0

	JMP head

; This just takes care of outputting the first two terms if necessary
@head
	; If we are asked for zero terms.. make it so
	CMP R0 0
	EQL end

	; Output the first term
	PUSH R1
	DEC R0

	; Do we want the second term?
	CMP R0 0
	EQL end

	; Output the second term
	PUSH R2
	DEC R0

	; Begin main loop
	JMP loop

@loop
	; If R0 is zero, we've got all the terms we required
	CMP R0 0
	EQL end

	; Else, we need one more Fibonacci term - calculate it in R3
	MOV R3 R1
	ADD R3 R2 ; R3 is now equal to R1 + R2 (the next term)
	MOV R1 R2
	MOV R2 R3 ; Shift down the two elements

	PUSH R2  ; Output the new term to the stack

	DEC R0
	JMP loop ; Next term!

@end
	RET ; Nothing else to do here, we've streamed out the outputs

WORDSIZE 32
REGISTER 3

; Adds a list of numbers passed as argument

@start
	CNT R0 ; Get the number of input elements
	MOV R1 0 ; R1 will contain the sum

	JMP loop ; Start processing

@loop
	; If R0 is zero, we've added all the numbers
	CMP R0 0
	EQL end

	; Else pop the integer and add it
	POP R2
	ADD R1 R2

	; Next element
	DEC R0
	JMP loop

@end
	CLR

	PUSH 1
	PUSH R1
	RET

And the best thing is that it actually works! This prints out the first seven Fibonacci terms:
$ ./bin/asm fibonacci.asm 7
0
1
1
2
3
5
8
The parsing and interpreting code is absolutely horrible, though; it started out nice, but things got out of control after adding the third instruction type. It should be pretty easy to write the code elegantly, especially in C++ (yes, I am also suicidal and wrote the parser in C).

Anyway, the input format is basically this: the first stack word is an integer giving the number of input words, and those input words are then pushed onto the stack. So for the Fibonacci example, the stack upon running the program looks like this:

[1] [7] [...]

And the output format is very much the same, except the program basically erases the stack and outputs its stuff there. So after returning, it looks like this:

[7] [0] [1] [1] [2] [3] [5] [8] [...]

Yes, it's not very convenient, but you need to start somewhere, right? It'll probably need input and output streams distinct from the actual stack eventually.

Also, the POP instruction will segfault if the stack is empty, and the stack is assumed to be infinite when really it's limited to 1024 words, because I needed an arbitrary limit (later it will need to grow dynamically). The language also supports arbitrarily many registers (defined by the REGISTER keyword at the top; WORDSIZE is supposed to indicate the word size of the registers in bits, but they are always 32 bits wide at the moment).
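
The empty-stack segfault at least is an easy fix; something like the following bounds-checked push/pop would do (a sketch only, with made-up names, not the interpreter's actual code):

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

#define STACK_WORDS 1024 /* still an arbitrary limit for now */

static uint32_t stack_mem[STACK_WORDS];
static size_t stack_top; /* number of words currently on the stack */

static void vm_push(uint32_t word)
{
    if (stack_top == STACK_WORDS) {
        fprintf(stderr, "stack overflow\n");
        exit(EXIT_FAILURE);
    }
    stack_mem[stack_top++] = word;
}

static uint32_t vm_pop(void)
{
    if (stack_top == 0) {
        fprintf(stderr, "stack underflow\n"); /* instead of segfaulting */
        exit(EXIT_FAILURE);
    }
    return stack_mem[--stack_top];
}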

Ultimately my goal is to have some sort of prototyping language between assembly and C that I can use to play with algorithms and gather useful behavioural information (e.g. highest stack/heap usage reached, instruction-level parallelism, etc.) which would otherwise be difficult to obtain via the x86 instruction set or similar (though I suppose tools already exist for this, but whatever, this is for fun too).

I have to say it's pretty thrilling to actually create some sort of parser/compiler that takes your programs and makes them work somehow. Now I know why people are so interested in designing languages... they are turned on by compilers! Anyway, that's it for tonight.
By the way, feel free to voice your disgust at the misshapen horror I have just brought into existence.


A few updates!

Posted 24 June 2013 · 743 views

Hello GameDev! It's been long, far too long since my last entry. This is actually a new journal, because I've moved away from rendering lately for a bit to go back to my older, preferred hobby: cryptography, and generally screwing around writing code that nobody will probably ever use. Though I will openly admit this type of programming has immense educational value.

Back in June 2012, I started working on a cryptography library called Ordo. A few of you might catch the reference to Neal Stephenson's Cryptonomicon. From the start it was never meant to be an all-encompassing library, so I restricted my scope to symmetric cryptography (that is, no RSA or elliptic curves, no SSL/TLS stuff, just low-level block cipher/hash function/etc. code). Why? A few reasons:

1. I do not feel qualified to implement some of the more complex stuff, and I would not be a responsible developer if I released potentially unsafe code. There are a lot more edge cases to check, and it's definitely far too much work for a single person to take on.
2. Having a well-defined, bounded and achievable scope is the cornerstone of every successful project. It helps mitigate the infamous "feature creep" stage, and keeps you focused on your goals as much as possible.
3. Symmetric cryptography is a lot more interesting to me than the rest, even if all the cool kids say otherwise.

Ordo was initially created as an alternative to OpenSSL, which can be a nightmare to work with whenever you run into a problem (I think anyone who has ever used it for a non-trivial task can attest to that), and as such always had three main goals:

1. good performance and cross-platform compatibility (what is the point of creating an inferior product?)
2. an easy-to-use, conventional API (heavily influenced by OpenSSL, with a few improvements)
3. good documentation (the OpenSSL documentation is at best sparse, and at worst nonexistent)

As such, the library was written in portable C89 with system-specific extensions. I'm sure many of you are already shifting uncomfortably in your seats, thinking "why not C++?". Well, firstly, C++ doesn't make for very good library boundaries: try exporting C++ classes from your library and see how many languages manage to interop with it (hint: not many). Then there's the problem that C++ binary compatibility has always been a gotcha, whereas C binary compatibility is fairly well understood. Finally, C is the lowest common denominator and simply works everywhere: you can link to a C library (sometimes even statically) and expect it to work, in any language, on any compiler, on any platform. Furthermore, because C is comparatively simple, it was easy to get the boilerplate object-oriented code (involving function pointers and abstraction layers) out of the way and get down to actually writing algorithms.
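
To give a flavour of what that boilerplate looks like, object-oriented C usually boils down to opaque state plus a table of function pointers, something like the sketch below (the names here are invented for illustration, not Ordo's actual API):

#include <stddef.h>

/* A hypothetical abstract "hash function" described by a table of
 * function pointers; generic code only ever talks to this struct. */
struct hash_ops {
    const char *name;
    size_t state_size;
    size_t digest_size;
    void (*init)(void *state);
    void (*update)(void *state, const void *data, size_t len);
    void (*final)(void *state, unsigned char *digest);
};

/* Each algorithm (SHA-256, Skein-256, ...) fills in one of these tables,
 * so algorithms can be added or removed without touching the callers. */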

Also, because performance matters, I decided to have code paths involving raw assembly code (most of which I wrote myself). This complicated things a bit and made for interesting challenges, but was worth it overall. Assembly is cool, and the performance gains are actually worth it most of the time. Of course, this is only used in the most performance-critical parts of the code, but you routinely see at least 120% speed improvements, sometimes up to 300% for particularly clever code; that makes a real difference. There is always a standard C code path, and the assembly implementations are only enabled when the processor supports them.
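
I won't go into how the selection actually happens in the library, but a typical way to do this kind of dispatch at runtime looks like the sketch below; the function names are made up, and __builtin_cpu_supports is a GCC/Clang extension (other compilers would need to query CPUID themselves):

/* Hypothetical dispatch between an AES-NI assembly path and the portable C
 * fallback; not the library's actual mechanism. */
void aes_encrypt_block_c(const unsigned char *in, unsigned char *out,
                         const unsigned char *key);
void aes_encrypt_block_aesni(const unsigned char *in, unsigned char *out,
                             const unsigned char *key);

void aes_encrypt_block(const unsigned char *in, unsigned char *out,
                       const unsigned char *key)
{
#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
    if (__builtin_cpu_supports("aes")) {   /* AES-NI available? */
        aes_encrypt_block_aesni(in, out, key);
        return;
    }
#endif
    aes_encrypt_block_c(in, out, key);     /* portable C path */
}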

The code itself is documented with Doxygen. It's a bit hard to keep up sometimes, but it is actually reasonably well maintained. Documentation has always been the hardest part, a constant struggle between consistency and redundancy, but I am determined to make it work in the end. I don't think I could bring myself to write a library without any documentation at all; I take great pride in making things work consistently and elegantly, and complete, up-to-date documentation is part of that. Code samples are important too, but those generally come at a later stage, when the library is sufficiently mature for the samples to be meaningful (though I have already implemented a couple).

Finally, a very important part of library development is design. It needs to make sense, be well-structured, and not have stupid inter-dependencies. Below is the header dependency graph for the entire library at its latest version. The overall design has been refactored well over a dozen times, and I find dependency graphs an excellent way to eyeball the general code structure and a good first step for spotting candidates for refactoring.

[Image: header dependency graph of the library's public headers]



The internal dependency graph (of the source files) is more complicated, but the above is what is actually exposed to the user. As you can see, I ultimately went with a modular design: it turns out many of the library's features can be stripped out without issues for very specialized builds or constrained environments. Also notice that the graph does not include any references to actual algorithms (there are no SHA-256 or AES headers, for instance). This means the abstraction layers are working, and shows that the library scales.
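
In practice, stripping features out at build time could look something like this (a rough, hypothetical illustration; the ORDO_WITH_* switches and symbol names are not the library's actual configuration macros):

#include <stddef.h>

struct hash_ops; /* the function-pointer table sketched earlier */

#ifdef ORDO_WITH_SHA256
extern const struct hash_ops sha256_ops;
#endif
#ifdef ORDO_WITH_SKEIN256
extern const struct hash_ops skein256_ops;
#endif

/* The list of available algorithms shrinks automatically with the build
 * configuration; generic code just walks this NULL-terminated array. */
static const struct hash_ops *const available_hashes[] = {
#ifdef ORDO_WITH_SHA256
    &sha256_ops,
#endif
#ifdef ORDO_WITH_SKEIN256
    &skein256_ops,
#endif
    NULL
};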

As of the time of this writing, Ordo runs on (at least) Windows 32-bit, Windows 64-bit and all Linux distributions, on x86 and amd64 processors. I have plans to add BSD compatibility but a few issues need to be resolved first, and Mac compatibility is actually unknown (I have never tried, though I suspect it will not be much work to implement).

The github repository is at https://github.com/TomCrypto/Ordo.

--

All in all, I have to say that writing a library of a larger scale than your average Python script is an interesting learning experience, and I'm learning a lot about software design and maintenance, even if I don't expect the library to ever become popular (cryptography is more or less saturated with existing open source and proprietary libraries, and adoption rates are extremely low by nature).

There is still a lot of work to do, and I think the finished product might be at least portfolio-worthy.

By the way, contributions are highly welcome, as always (should you be interested in the field).

EDIT: currently working on Mac compatibility, and there are a few API improvements in progress.




