2018-03-14, post № 192

**C**, **mathematics**, **programming**, #improper integral, #power series, #source layout, #Taylor series

Today is the fourteenth of March 2018. Today’s date — when written in the M/D/Y format, 3/14/18 — looks close enough to Archimedes’ constant’s decimal representation for it to be the constant’s celebratory day.

As always on Pi Day, I have implemented an algorithm to generate 𝜋, though this year’s accuracy is not the greatest (try it online).

typedef double d;typedef long l;l f(l n
){l f=1;while(n>1)f*=n--;return f;}d ne(d v,
l p){d r=1;for(l k=0;k<p;k++)r*=v;return r;}d
ps(d(*c)(l),l i,d x){d s=0;for(l k=0;k<i;k++)s
+=c(k)* ne(x, k);return
s;} d exc ( l
n){ return 1./f (n)
; } d exp(d x
) { return
ps(exc ,20,x);}
d G( d x){return
exp(-x *x);}d I
(d a,d b,d g,d
(* f)(d )){d cs=
0;for( d x=a;x<=
b;x +=g) cs+=f(x)
*g;return cs ; } int
main( ) { d pi_root =I(
-2.5, 2.5 , 1e-4,G); d pi
= pi_root * pi_root+(0xf&0xf0
) ; printf( "%c%c%c%c%c%f%c"
,'p','i', ' ','=',' ',pi
,'\n' ) ; }

I use various methods of generating 𝜋 throughout the Pi Days; this time I chose to pair an improper integral with a power series. 𝜋 is calculated using a famous identity involving infinite continuous sums, roots, 𝑒, statistics and — of course — 𝜋: the Gaussian integral, ∫_{−∞}^{∞} e^{−x²} dx = √𝜋.

Furthermore, to compute 𝑒, the following identity is used: e^x = Σ_{n=0}^{∞} x^n∕n!.

Both formulae are combined, the approximated value of √𝜋 is squared and 𝜋 is printed to `stdout`.

You can download this program’s prettified (some call it obfuscated, see above) source code pi.c and also the (nearly, as `#include` is not missing so that the compiler does not need to guess my dependencies) equivalent code in a more traditional source layout tpi.c.
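The same computation can be sketched in a handful of lines of Python (an illustrative re-implementation, not the original C): approximate exp by a truncated Taylor series, integrate exp(−x²) with a Riemann sum and square the result, since the Gaussian integral over the whole real line equals √𝜋.

```python
import math

def exp_series(x, terms=20):
    # partial sum of the Taylor series e^x = sum x^n / n!
    s, term = 0.0, 1.0
    for n in range(terms):
        s += term
        term *= x / (n + 1)
    return s

def integrate(f, a, b, step):
    # simple left Riemann sum, as in the C program
    s, x = 0.0, a
    while x <= b:
        s += f(x) * step
        x += step
    return s

# integrate the Gaussian over [-2.5, 2.5]; the tails beyond are tiny
pi_root = integrate(lambda x: exp_series(-x * x), -2.5, 2.5, 1e-4)
pi = pi_root * pi_root  # the squared integral approximates pi
```

With these parameters the approximation lands within about 0.01 of 𝜋, matching the remark above that this year’s accuracy is not the greatest.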

Happy Pi Day!

2018-02-24, post № 191

**art**, **haiku**, **poetry**,

Water droplet falls,

Joining fellow fluid beings.

Ripples go their way.

2018-01-27, post № 190

**C**, **programming**, #algorithm, #bubble sort, #insertion sort, #merge sort, #natural merge sort, #quicksort, #selection sort, #time complexity

Sorting arrays of integers is an integral part of computing. Therefore, over the years, a lot of different approaches to sorting were found, resulting in the wide range of sorting algorithms existent today.

Of these the most common algorithms include quicksort — maybe one of the best known algorithms by name —, merge sort, natural merge sort — my personal favourite out of this list — and insertion, selection and bubble sort — more trivial sorting approaches. In this post, I will look at the aforementioned algorithms alongside a C implementation. The full C source code can be seen below and also downloaded.

All sorting algorithm implementations take an integer pointer and an array length as input; sorting the array in-place — manipulating the pointed to memory.

## I) Quicksort

A fundamental observation used by most sorting algorithms is that a singleton array — an array which only consists of one element — is sorted. Thus one can write an algorithm which recursively operates on already sorted arrays, as every array has a sorted *base* (all array elements interpreted as singleton arrays).

Quicksort first chooses one of the input array’s elements as its pivot element — my implementation always chooses the first element — and splits the remaining array into two subarrays, one containing all array elements less than or equal to the pivot, the other one containing those greater than the pivot. The array is rearranged such that the first subarray is followed by the pivot element which is followed by the second subarray. Both subarrays are recursively sorted; singleton arrays mark the end of recursion as they are already sorted. Quicksort — as its name implies — is a rather efficient sorting algorithm, having an average time complexity of 𝒪(n log n).

In the following I show my C implementation. It first creates a swap array for storing intermediate values while permuting the array and then calls its recursive part which initiates the sorting.

// quick sort (recursive part)
void _qsort(int *Arr, int *Swp, int len) {
// split array at pivot (first element) and store in swap
int l = 0, r = len-1;
for (int j = 1; j < len; j++)
if (Arr[j] < Arr[0]) Swp[l++] = Arr[j];
else Swp[r--] = Arr[j];
// move swap to array
Swp[l] = Arr[0];
for (int j = 0; j < len; j++)
Arr[j] = Swp[j];
// recursively sort split arrays
if (l > 1) _qsort(Arr, Swp, l);
if (len-r-1 > 1) _qsort(Arr+l+1, Swp+l+1, len-r-1);
}
// quick sort (initialization)
void QSort(int *Arr, int len) {
if (len < 2) return;
int *Swp = malloc(len*sizeof(int));
_qsort(Arr, Swp, len);
free(Swp);
}

## II) Merge Sort

As quicksort, merge sort also fundamentally uses the inherent sorted nature of singleton arrays. However, in contrast to quicksort, merge sort does not split the input array into a correctly placed pivot and two arrays left to sort, but rather uses a merging algorithm to merge two already sorted arrays into one sorted array — conceptually moving from the bottom up instead of from the top down.

To merge two sorted subarrays, simply take the smallest of the first elements of both subarrays to create a new array; repeating until both subarrays are empty. Once a merging function is implemented, simply recursively split the input array and merge all singleton arrays together to sort the entire array. As quicksort, merge sort also has an average time complexity of 𝒪(n log n).

// merge two arrays located in memory next to each other
void merge(int *Arr, int *Swp, int llen, int rlen) {
// build array by choosing the smallest of both
// array's first elements, merging both arrays
int len = llen+rlen, l = 0, r = 0;
for (int j = 0; j < len; j++) {
if (l < llen && r < rlen)
if (Arr[l] < Arr[llen+r]) Swp[j] = Arr[l++];
else Swp[j] = Arr[llen+r++];
else if (l < llen) Swp[j] = Arr[l++];
else if (r < rlen) Swp[j] = Arr[llen+r++];
}
// move swap to array
for (int j = 0; j < len; j++)
Arr[j] = Swp[j];
}
// merge sort (recursive part)
void _msort(int *Arr, int *Swp, int len) {
// split arrays' lengths, sort split arrays
int llen = len/2, rlen = len-llen;
if (llen > 1) _msort(Arr, Swp, llen);
if (rlen > 1) _msort(Arr+llen, Swp, rlen);
// merge arrays
merge(Arr, Swp, llen, rlen);
}
// merge sort (initialization)
void MSort(int *Arr, int len) {
if (len < 2) return;
int *Swp = malloc(len*sizeof(int));
_msort(Arr, Swp, len);
free(Swp);
}

## III) Natural Merge Sort

Instead of relying on splitting the input array into singleton lists, natural merge sort searches for subarrays which naturally appear sorted and merges them to form one sorted list. The same merging algorithm as in traditional merge sort is used; as merge sort, natural merge sort also has an average time complexity of 𝒪(n log n).

// natural merge sort
void NMSort(int *Arr, int len) {
if (len < 2) return;
int *Swp = malloc(len*sizeof(int));
for (int k = 0, j = 0; j < len; k = j+1) {
for (j = k; j < len-1 && Arr[j] <= Arr[j+1];) j++;
if (j < len) merge(Arr, Swp, k, j-k+1);
}
free(Swp);
}

## IV) Insertion Sort / Selection Sort

Being rather trivial sorting algorithms, insertion sort builds up a sorted array by inserting one element after the other at its correct position, whilst selection sort repeatedly selects the input array’s smallest remaining element. As the two are closely related, I have only implemented one of them — the variant below selects minima, not using any swap memory but only swapping array elements with each other. It is a brute-force approach to sorting, making it a rather inefficient algorithm with an average time complexity of 𝒪(n²).

// insertion sort
void ISort(int *Arr, int len) {
if (len < 2) return;
// loop through array elements
for (int j = 0; j < len; j++) {
// find minimum element
int m = j;
for (int i = j+1; i < len; i++)
if (Arr[i] < Arr[m]) m = i;
// swap minimum element with current element
if (j != m) {
int swp = Arr[j];
Arr[j] = Arr[m];
Arr[m] = swp;
}
}
}

## V) Bubble Sort

Bubble sort works by iteratively finding neighbouring elements which are misaligned and swapping them, thereby sorting the entire list. Presumably, the rather amusing name comes from observing how elements behave while they are being sorted, bubbling to the top like a pop’s fizz. Its average time complexity is a fairly inefficient 𝒪(n²).

// bubble sort
void BSort(int *Arr, int len) {
while (len-- > 1)
for (int j = 0; j < len; j++)
if (Arr[j] > Arr[j+1]) {
int swp = Arr[j];
Arr[j] = Arr[j+1];
Arr[j+1] = swp;
}
}

## Conclusion

In conclusion, one will most likely not have to worry about implementing sorting algorithms, as most modern languages supply an essential tool belt to the user, complete with various sorting methods and predefined data structures dependent on sorting. Nevertheless, the theory and diversity of sorting algorithms fascinates me, as it shows how widely different tactics can be used to solve the same seemingly mundane problem: sorting an array of integers.

All implementations shown in this post have been tested several thousands of times on arrays with varying lengths — to see the test function take a look at the full source code.
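Such a test can be sketched in a few lines (a Python illustration of the testing idea, not the post’s actual C test function; a Python port of the bubble sort above stands in for the C implementations):

```python
import random

def bsort(arr):
    # Python port of the bubble sort above,
    # standing in for any C implementation under test
    a, n = list(arr), len(arr)
    while n > 1:
        n -= 1
        for j in range(n):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

# sort many random arrays of varying lengths and
# compare each result against a known-good sort
for _ in range(1000):
    arr = [random.randint(-100, 100) for _ in range(random.randint(0, 50))]
    assert bsort(arr) == sorted(arr)
```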

2017-12-30, post № 189

**Java**, **mathematics**, **programming**, #chaos, #chaos theory, #fractal geometry, #fractal viewer, #Java 1.8

Lyapunov fractals are mathematical objects living in the plane, graphing regions of stability and chaos of a certain logistic map whose ever-changing population growth rate alternates between two values.
I implemented a Lyapunov fractal viewer in Java 1.8 which lets you freely move around the plane, set sequences and iteration depths to explore the fractal. Source code can be seen below or downloaded, as can the Java executable (Lyapunov.java, Lyapunov.jar).

Articles on the topic of Lyapunov fractals include a post by Earl Glynn from the year 2000 in which they talk about their Lyapunov fractal generator written in Pascal — a rewrite of software written in 1992. Also worth reading is A. Dewdney’s article about Mario Markus’ work (Scientific American, September 1991).

My first encounter with this fractal was whilst browsing Wikipedia and stumbling across its Wikipedia entry. Since it was a fractal and looked fairly intricate and thus explorable, I set out to write my own renderer — I chose to implement it in Java, as it is a compiled language with a nice set of GUI libraries. Since Wikipedia listed an algorithm for generating Lyapunov fractal images, I thought an implementation would be fairly straight-forward. However, as always, the devil is in the detail, and I quickly noticed that while the short Wikipedia entry looks convincing and well-written at first glance, it fails to answer deeper questions regarding the very topic it is about.

The following is a description of the fractal generation algorithm. It is a modified version of the algorithm detailed by Wikipedia which addresses certain ambiguities the original had (more on those later).

## Fractal generation algorithm

- Take as input a point (𝑎, 𝑏) in the plane and a sequence consisting of the letters A and B; for example AB.
- Construct a function S(𝑛) which returns the zero indexed 𝑛-th sequence entry and wraps around to the sequence’s start once the sequence’s end is hit.
- Construct a function r_𝑛 which selects either 𝑎 or 𝑏: r_𝑛 = 𝑎 if S(𝑛) = A and r_𝑛 = 𝑏 otherwise.
- Let x_0 = 1∕2 and define x_{𝑛+1} = r_𝑛 · x_𝑛 · (1 − x_𝑛).
- Calculate the Lyapunov exponent 𝜆 = lim_{𝑁→∞} (1∕𝑁) · Σ_{𝑛=1}^{𝑁} log₂ |r_𝑛 · (1 − 2 · x_𝑛)|. Since calculating the limit as 𝑁 goes to infinity is rather difficult in practice, one can simply use a sufficiently large 𝑁 and pretend to have reached infinity.
- Output a color value according to 𝜆. I chose green for 𝜆 < 𝟢 and blue for 𝜆 ≥ 𝟢, using arbitrarily defined value ranges to map 𝜆 to an actual color.

The following is a Java snippet which implements the algorithm described above.

// choose a or b according to Seq and n
static double r(String Seq, int n, double a, double b) {
if (Seq.charAt(n%Seq.length()) == 'A') return a; else return b;
}
// calculate a pixel color (0x00rrggbb) for given parameters
static int LyapunovPixel(double a, double b, String Seq, int N) {
// array of all x_n; x_0, the starting value, initialize all x_n values
double[] X = new double[N+1]; X[0] = .5;
for (int n = 1; n <= N; n++) X[n] = r(Seq,n-1,a,b)*X[n-1]*(1-X[n-1]);
// calculate the Lyapunov exponent (to a certain precision dictated by N)
double lmb = 0;
for (int n = 1; n <= N; n++)
lmb += Math.log(Math.abs(r(Seq,n,a,b)*(1-2*X[n])))/Math.log(2);
lmb /= N;
// infinity was hit, use a black pixel
if (Double.isInfinite(lmb)) return 0x000000;
// color pixel according to Lyapunov exponent
double MIN = -1, MAX = 2;
if (lmb < 0) return ((int)(lmb/MIN*255))<<8;
else return ((int)(lmb/MAX*255))<<0;
}
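To sanity-check the exponent computation, one can feed it a constant sequence (a Python sketch of the same formulas as above, not part of the viewer; the logistic map with constant r = 4 is well known to have a Lyapunov exponent of ln 2, i.e. 1 when using the dyadic logarithm):

```python
import math

def lyapunov(r, N, x0):
    # x_{n+1} = r_n * x_n * (1 - x_n), as defined above
    x = [x0]
    for n in range(1, N + 1):
        x.append(r(n - 1) * x[-1] * (1 - x[-1]))
    # lambda = (1/N) * sum_{n=1..N} log2|r_n * (1 - 2 x_n)|
    return sum(math.log2(abs(r(n) * (1 - 2 * x[n])))
               for n in range(1, N + 1)) / N

# constant sequence r = 4; the starting value x0 = 0.3 avoids the
# degenerate orbit 0.5 -> 1 -> 0 which x0 = 0.5 would produce here
lmb = lyapunov(lambda n: 4.0, 100000, 0.3)  # approximately 1
```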

Coming back to Wikipedia’s algorithm, there were a few things I found irritating at best when attempting my implementation and thus addressed in the algorithm description seen above. A closer look at potentially misleading or ambiguous statements follows, together with my reasoning to resolve them.

- It is not clear whether the sequence is zero or one indexed, though it has to be zero indexed as , evaluates to , implying the definition of and thus .
- It is not clear which logarithm is meant; the natural logarithm, a generic or the dyadic logarithm — the latter is actually used. To find the actual logarithm base, I dug deeper and used Wikipedia’s external links to find a post by Earl Glynn, answering this question.
- It is not clear what is meant by the second half of the sentence beneath the Lyapunov exponent. It reads *“… dropping the first summand as for .”* As the sum starts with 𝑛 = 𝟣 and one would not dare to calculate for , this sentence’s existence bewilders me.
- It is not clear how exactly the colors are derived, only that they have something to do with the Lyapunov exponent. I simply chose arbitrary values that looked convincing for mapping 𝜆 to a pixel color.

One of the most frustrating *bugs* I came across was an unexplainable axis flip. My code generated the fractal just fine except for the fact that every image was flipped along the diagonal crossing the origin with a 𝟦𝟧° angle to the horizontal. It was as though the coordinates were swapped somewhere in my code and I simply could not figure out where.

Finally, after hours of looking at the same code over and over again, a closer look at Earl Glynn’s post brought an end to my misery. Just below the three welcoming fractal renderings, a screenshot of their software is shown — complete with a blue and red line indicating the coordinate system’s orientation. 𝑎 and 𝑏 are — contrary to all coordinate systems involving parameters named after the first two letters in the Latin alphabet — indeed flipped. Wikipedia’s images must simply have run with this decision, without noting it.

Because of this flip, when one wants to render images specified in the reversed sequence format, they simply have to swap all letters (for example becomes ).

As Glynn themself says, *“I would have reversed the ‘a’ and ‘b’ labels to be consistent with normal ‘x’ and ‘y’ axes conventions, but conform here with the same convention as used by Markus.”* (Referring to Mario Markus, co-author of *Lyapunov Exponents of the Logistic Map with Periodic Forcing*.)

After having eliminated all vagueness regarding the fractal generation, writing the outer Java viewer hull was a fairly easy task. As a template I took my Mandelbrot Set III viewer (the reason why most variable names reference complex numbers), complete with multithreading, allowing a pixely fractal exploration until one stops their exploration, letting the thread catch up and display a higher resolution image. The same is done for rendering 𝟦K images and saving them to files in the background — 𝟦K renderings are saved as `.png` files and named according to their parameters. A list of program controls follows.

## Controls

- Mouse dragging lets one pan the complex plane.
- Mouse scrolling lets one zoom the complex plane.
- Pressing *N* evokes a dialogue box where one can specify an iteration depth 𝑁.
- Pressing *S* evokes a dialogue box where one can enter a sequence.
- Pressing *R* resets all fractal parameters to the default.
- Pressing *P* initiates a 𝟦K fractal rendering. The 𝟦K fractal rendering thread can be killed by exiting the program prematurely, thus losing (!) the rendering.

2017-12-25, post № 188

**art**, **haiku**, **poetry**, #fractal, #Mandelbrot set

Barren trees, white snow.

Cold and lasting winter nights.

Quiet fire crackling.

2017-12-16, post № 187

**mathematics**, **programming**, **Python**, #linear algebra, #matrices

Matrices are an important part of linear algebra. By arranging scalars in a rectangular manner, one can elegantly encode vector transformations like scaling, rotating, shearing and squashing, solve systems of linear equations and represent general vector space homomorphisms.

However, as powerful as matrices are, when actually applying the theoretical tools one has to calculate specific values. Doing so by hand is possible, yet gets cumbersome quite quickly when dealing with matrices which contain more than a few rows and columns.

So, even though there are a lot of other implementations already present, I set out to write a Python matrix module containing a matrix class capable of basic matrix arithmetic (matrix multiplication, transposition, …) together with a set of functions which perform some higher-level matrix manipulation like applying Gaussian elimination, calculating the reduced row echelon form, determinant, inversion and rank of a given matrix.

Module source code can be seen below and downloaded. When saved as `matrix.py` in the current working directory, one can import the module as follows.

>>> import matrix
>>> A = matrix.Matrix([[13, 1, 20, 18],
... [ 9, 24, 0, 9],
... [14, 22, 5, 18],
... [19, 9, 15, 14]])
>>> print A**-1
-149/1268 -67/634 83/1268 171/1268
51/1268 239/1902 -105/1268 -33/1268
Matrix( 73/634 803/4755 -113/634 -87/3170 )
13/1268 -75/634 197/1268 -83/1268

Matrices are defined over a field, typically ℝ in theoretical use, though for my implementation I chose not to use a `double` data structure, as it lacks the conceptual precision in numbers like a third. As one cannot truly represent a large portion of the reals anyways, I chose to use ℚ, which also is a field, though one which can be — to a certain scalar size and precision — accurately represented using fractional data types (Python’s built-in `Fraction` is used here).
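The difference is easy to demonstrate (a generic Python comparison, not module code): `Fraction` represents a third exactly, where binary floating point cannot even represent a tenth.

```python
from fractions import Fraction

# exact rational arithmetic: a third, three times, is precisely one
third = Fraction(1, 3)
exact = third + third + third == 1

# binary doubles cannot represent 0.1 exactly, so rounding errors surface
inexact = (0.1 + 0.2 == 0.3)  # False for floats ...
rational = Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10)  # ... True here
```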

To simplify working with matrices, the implemented matrix class supports operator overloading such that the following expressions — `A[i,j]`, `A[i,j]=l`, `A*B`, `A*l`, `A+B`, `A-B`, `-A`, `A/l`, `A**-1`, `A**"t"`, `~A`, `A==B`, `A!=B` — all behave in a coherent and intuitive way for matrices `A`, `B`, scalars `l` and indices `i`, `j`.

When working with matrices, there are certain rules which must be obeyed, like proper sizes when adding or multiplying, invertibility when inverting and others. To minimize potential bug tracking problems, I tried to include a variety of detailed exceptions (`ValueError`s) explaining the program’s failure at that point.

Apart from basic matrix arithmetic, a large part of my module centers around Gaussian elimination and the functions which follow from it. At their heart lies the implementation of `GaussianElimination`, a function which calculates the reduced row echelon form `rref(A)` of a matrix together with the transformation matrix `T` such that `T*A = rref(A)`, a list of all matrix pivot coordinates, the number of row transpositions used to achieve row echelon form and a product of all scalars used to achieve reduced row echelon form.

From this function, `rref(A)` simply returns the first, `rrefT(A)` the second parameter. The functions `rcef(A)` (reduced column echelon form) and `rcefS(A)` (`A*S = rcef(A)`) follow from repeated transposition.

Determinant calculation uses characteristic determinant properties (multilinearity, alternation and the unit hypercube having hypervolume one).

Using these properties, the determinant is equal to the total product of all factors used in transforming the matrix into reduced row echelon form, multiplied by the permutation parity (minus one to the power of the number of transpositions used).
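That derivation can be sketched as follows (a hypothetical stand-alone helper in the module’s spirit, not its actual `GaussianElimination` function): eliminate downwards over exact rationals while tracking the parity of row transpositions, then read the determinant off the pivots.

```python
from fractions import Fraction

def det(rows):
    # Gaussian elimination on exact rationals; the determinant is the
    # product of the pivots, signed by the row-transposition parity
    A = [[Fraction(x) for x in row] for row in rows]
    n, sign = len(A), 1
    for c in range(n):
        # find a row with a nonzero pivot in column c
        p = next((r for r in range(c, n) if A[r][c] != 0), None)
        if p is None:
            return Fraction(0)  # no pivot: the matrix is singular
        if p != c:
            A[c], A[p] = A[p], A[c]
            sign = -sign  # each transposition flips the sign
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    result = Fraction(sign)
    for i in range(n):
        result *= A[i][i]
    return result
```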

Questions regarding invertibility and rank can also conveniently be implemented using the above described information.

All in all, this Python module implements basic matrix functionality paired with a bit more involved matrix transformations to build a usable environment for manipulating matrices in Python.

2017-12-02, post № 186

**ImageMagick**, **PIL**, **programming**, **Python**, #animation, #automation, #cellular, #PPCG, #quantum mechanics

A recent PPCG challenge titled The Quantum Drunkard’s Walk was about a tiny drunken person to whom quantum mechanics apply and who — being drunk — randomly walks in one of the four cardinal directions with each step they take.

As they experience the strange world of quanta, they can simultaneously take every path at once and their paths influence each other. The challenge then asked to output all possible positions the quantum drunkard could occupy, all paths they could take in ASCII representation.

The question also states this problem’s equivalence to a cellular automaton, when one removes the story boilerplate.

Cells in this cellular automaton are square and can occupy one of three states: empty, awake or sleeping. Each iteration, all cells change according to three rules.

- An empty cell wakes up iff there is exactly one awake cell amongst its cardinal neighbours, else it stays empty.
- An awake cell goes to sleep.
- A sleeping cell continues to sleep.
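These three rules translate directly into a few lines of Python (a sketch using sets of coordinate pairs; any cell not present in either set counts as empty):

```python
def step(awake, sleeping):
    # advance the automaton by one iteration
    neighbours = lambda x, y: [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    new_awake = set()
    for cell in {n for c in awake for n in neighbours(*c)}:
        if cell in awake or cell in sleeping:
            continue  # only empty cells may wake up
        # rule 1: wake up iff exactly one cardinal neighbour is awake
        if sum(n in awake for n in neighbours(*cell)) == 1:
            new_awake.add(cell)
    # rules 2 and 3: awake cells fall asleep, sleeping cells keep sleeping
    return new_awake, sleeping | awake

# the quantum drunkard starts as a single awake cell at the origin
awake, sleeping = {(0, 0)}, set()
awake, sleeping = step(awake, sleeping)
```

After one step, the four cardinal neighbours of the origin are awake and the origin itself sleeps.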

Being code golf, the aim was to come up with a minimally sized source code; my attempt required 𝟤𝟣𝟦 bytes and prints a nested array containing one-length strings (characters), as this output method is cheaper than concatenating all characters to one string.

`python quantum.py -rmi 200`

However, one user took the challenge idea a bit further and created an animated gif showing the walk’s, or cellular automaton’s, progression over time with an increasing number of iterations. My Python program shown in this post does exactly that, generating an animated gif showing the automaton’s progression. I even implemented rainbow support, possibly improving the automaton’s visual appearance.

Python source code can be downloaded and seen below.

I use the Python Imaging Library to produce all frames and a shell call to let ImageMagick convert all frames into an animated gif. Animation parameters are taken via shell arguments; a full list of features follows (also available via the `-h` flag).

- `--iterations N`: Number of iterations (initial frame not counted)
- `--colorempty C`: Empty cell color (#rrggbb)
- `--colorawake C`: Awake cell color (#rrggbb)
- `--colorsleeping C`: Sleeping cell color (#rrggbb)
- `--rainbow`: Use rainbow colors (overrides color settings)
- `--scale N`: Cell square pixel size
- `--convert`: Execute ImageMagick’s convert command
- `--delay N`: Delay between frames in output image
- `--loop N`: Gif loops (0 means indefinite looping)
- `--keepfiles`: Do not delete files when converting

`python quantum.py -s 25 -md 50`

2017-11-18, post № 185

**art**, **haiku**, **poetry**,

A tiny spark flies;

Ignites a crumbled paper.

Thus fire is born.

2017-11-04, post № 184

**C**, **programming**, **Python**, #recreational

A polyglot — coming from the Greek words πολύς (many) and γλώττα (tongue) — is a piece of source code which can run in multiple languages, often performing language-dependent tasks.

Even though such a source code feature may not seem particularly useful or even possible with certain language combinations, trying to bend not one but multiple languages’ syntactic rules and overall behavior is still interesting.

An example of a rather simple polyglot would be a Python 2 / Python 3 polyglot — if one counts those as two separate languages. Because of their enormous similarities, one can pick out differences and use those to determine which language runs the source code.

if 1/2>0:print("Python 3")
else:print("Python 2")

Utilizing Python’s version difference regarding integer division and real division is one of the most common ways to tell the two apart, as it can also be used to control only a specific part of a program instead of having to write two nearly identical programs (which improves the program’s style and cuts down on bytes — an important consideration if one golfs a polyglot).

However, polyglots combining languages which are not as similar as Python 2 and Python 3 require more effort. The following is a general Python 2 / C polyglot, meaning that nearly all C and Python 2 programs can be mixed using this template (there are a few rules both languages need to abide by, which will become apparent later).

#define _\
"""
main(){printf("C");}
#define _"""
#define/*
print"Python 2"
#*/_f

In the above code, `main(){printf("C");}` can be nearly any C code and `print"Python 2"` can be nearly any Python 2 code.

Language determination is exclusively done via syntax. A C compiler sees a `#define` statement and a line continuation `\`, another two `#define` statements with a block comment in between and actual compilable C source code (view first emphasis). Python, on the other hand, treats all octothorps, `#`, as comments, ignoring the line continuation, and triple-quoted strings, `"""..."""`, as strings rather than statements and thus only sees the Python code (view second emphasis).

My first ever polyglot used this exact syntactical language differentiation and solved a task titled Life and Death of Trees (link to my answer).

2017-10-31, post № 183

**art**, **poetry**, #dark, #iambic pentameter, #poem, #Shakespearean sonnet, #sonnet

Gentle whispers wake my worldly being,

for long have I been laying here asleep.

My eye I open, though without seeing;

a world possessed by darkness — far and deep.

I want to try and move but must I not.

An every thought consumed by this abyss.

I try unraveling this psychic knot,

though does this fight seem to be meaningless.

Alas, before my mind is swallowed whole,

a square of light appears in front of me.

A cat inside the light; it wants my soul,

strips me of my heart so I will not be.

Without its core, my outer shell lies here,

the inner power used to fuel all fear.

-- Jonathan Frech, 30th of October 2017

Manuscript.

2017-10-21, post № 182

**C**, **programming**, #bitmap, #file, #file format

C is one cool and important language. CPython and Unix are based on it, the Mars Curiosity rover is run by it and even the GCC C compiler itself is written in C. However, as C is some years old by now, it lacks a lot of the higher-level features most modern languages possess, being more *down to the silicon*, as the cool kids say. Concepts like pointer manipulation, bit fiddling and its string implementation — just to name a few — are at times cumbersome, insecure and error-prone; nevertheless, there is a certain appeal to writing in C.

Being only one abstraction level away from Assembly — which itself is only one abstraction level above raw byte code — and having access to file manipulation down to the individual bit, I set out to write a Microsoft Bitmap (`.bmp`) implementation in pure C. As Microsoft’s standard for this image file format is quite feature-rich, I decided to focus on the bare minimum — a bitmap with 𝟤𝟦-bit color depth (three color channels, one byte each), one color plane, no compression, no palette and 𝟥𝟢𝟢 DPI.

My Bitmap implementation supports both reading and writing `.bmp` files, as well as generating some test images — including a Mandelbrot set fractal renderer, of course. Implementation source code can be downloaded (`bmp.c`) or seen below.

A Mandelbrot set fractal rendering.

Implementing a file format requires knowing its specification. Although it is not the best article I have ever seen, this Wikipedia article gave me some insights. The missing pieces were reverse engineered using Adobe Photoshop CC and the HxD hex editor.

The following is a snippet of the implementation’s `savebmp` function (full source code listed below). It illustrates the Bitmap file’s byte layout, only showing the file header and omitting the lengthy data part concatenated to the header. `S`, `K`, `W`, `H` and `B` are all byte arrays of length four (little-endian format) which contain the file’s total size, the bitmap data offset (which is constant, since the header is always exactly 𝟧𝟦 bytes large), the image’s dimensions (horizontal and vertical) and the bitmap data section’s size, respectively.

/* bitmap file header */
0x42, 0x4D, // BM
S[0], S[1], S[2], S[3], // file size
0x00, 0x00, 0x00, 0x00, // unused
K[0], K[1], K[2], K[3], // bitmap data offset
/* DIB header */
0x28, 0x00, 0x00, 0x00, // DIB size
W[0], W[1], W[2], W[3], // pixel width
H[0], H[1], H[2], H[3], // pixel height
0x01, 0x00, // one color plane
0x18, 0x00, // 24 bit color depth
0x00, 0x00, 0x00, 0x00, // no compression
B[0], B[1], B[2], B[3], // bitmap data size
0x23, 0x2E, 0x00, 0x00, // 300 DPI (horizontal)
0x23, 0x2E, 0x00, 0x00, // 300 DPI (vertical)
0x00, 0x00, 0x00, 0x00, // no palette
0x00, 0x00, 0x00, 0x00 // color importance
/* data bytes follow */

Key bytes to note are the first two identifying the file type (the ASCII-encoded letters `BM`) and the DPI bytes, `0x23`, `0x2E`, which indicate `0x00002E23 = 11811` pixels per meter in both the horizontal and vertical direction. Converting from pixels per meter to dots per inch results in `11811 / (1 meter / 1 inch) = 11811 * 127 / 5000 = 300` DPI (roughly).

Most values are represented using four bytes in little-endian format. Translating an 𝟥𝟤-bit integer into four little-endian formatted bytes can be achieved as follows.

/* unsigned 32-bit integer */
unsigned int n = 0b10100100010000100000100000010000;
/* < m sig><sm sig><sl sig>< l sig> */
/* byte (unsigned char) array of size four, least significant byte first */
unsigned char N[4] = {
(n & 0x000000ff) >> 0, // least significant byte
(n & 0x0000ff00) >> 8, // second least significant byte
(n & 0x00ff0000) >> 16, // second most significant byte
(n & 0xff000000) >> 24 // most significant byte
};
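The same little-endian packing can be cross-checked against Python’s `struct` module, whose `<I` format emits an unsigned 32-bit integer least significant byte first (a side note, not part of the C implementation):

```python
import struct

n = 0b10100100010000100000100000010000
# pack n as a little-endian unsigned 32-bit integer ('<I'),
# then rebuild the same four bytes by masking and shifting
packed = struct.pack('<I', n)
manual = bytes([(n >> shift) & 0xff for shift in (0, 8, 16, 24)])
```

Both expressions yield the identical byte sequence, least significant byte first — the order the bitmap header expects.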

Other than rendering a fractal, I also implemented three nested loops which output an image containing every possible color exactly once (`(2**8)**3 = 16777216` pixels in total).

All sixteen million colors in one image.

An image’s data type is implemented as a `struct image` which contains three variables — `width` and `height`, two integers specifying the image’s dimensions, and `*px`, a pointer to a one-dimensional integer array of size `width*height` which holds the entire image data.

Defined functions are listed ahead.

- `image * readbmp(char []);` Reads an image specified by a file name. If reading fails, a `NULL` pointer is returned.
- `void savebmp(image *, char []);` Saves a given image to a file with the specified name.
- `image * newimage(int, int);` Returns a pointer to an image struct with specified dimensions (the image will be filled with nothing but black pixels).
- `void freeimage(image *);` Frees an image struct’s memory.
- `int getpx(image *, int, int);` Returns the pixel color at specified coordinates.
- `void setpx(image *, int, int, int);` Sets the pixel color at specified coordinates.
- `void fill(image *, int);` Fills a given image with a given color (all pixels are set to the specified color).
- `int color(byte, byte, byte);` Returns a 𝟥𝟤-bit integer representing a color specified by three bytes (`byte` is defined through `typedef unsigned char byte;`).
- `int hsl(double, double, double);` Returns a 𝟥𝟤-bit integer representing a color specified by three doubles in the HSL color format.

*Images shown in this post were converted to `.png` files as WordPress does not allow `.bmp` file uploads; the raw pixel data should, however, be identical.*
