This is a little challenge to test the low-level expressive power of your favorite programming language:
Task 1: Write a function which takes a floating point value (double-precision if your language makes the distinction) and prints to stdout its internal binary representation (that is, a "1" for each set bit and a "0" for each unset bit).
Task 2: Expand the function to support any type, not just floating point types.
In C++ the first task is rather easy. Here's a commented solution:
#include <iostream>
#include <climits>

void printBinaryRepresentation(double value)
{
    // Determine whether this is a big-endian or a little-endian system:
    int dummy = 1;
    bool littleEndian = (*reinterpret_cast<char*>(&dummy) == 1);

    // The trick is to create a char pointer to the value:
    const unsigned char* bytePtr =
        reinterpret_cast<const unsigned char*>(&value);

    // Loop over the bytes in the floating point value:
    for(unsigned i = 0; i < sizeof(double); ++i)
    {
        unsigned char byte;
        if(littleEndian) // we have to traverse the value backwards:
            byte = bytePtr[sizeof(double) - i - 1];
        else // we have to traverse it forwards:
            byte = bytePtr[i];

        // Print the bits in the byte, most significant bit first:
        for(int bitIndex = CHAR_BIT - 1; bitIndex >= 0; --bitIndex)
        {
            std::cout << ((byte >> bitIndex) & 1);
        }
    }
    std::cout << std::endl;
}
Enhancing the function to support any type is really easy too:
template<typename Type>
void printBinaryRepresentation(const Type& value)
{
    // Exact same contents here as above,
    // but changing sizeof(double) to sizeof(Type)
}