The computer stands at the pinnacle of a long line of inventions on how we communicate an idea from one mind to another. The ability to share thoughts and ideas—beyond verbal communication—can be traced back to prehistoric cave paintings. After all, what is art, if not the act of sharing information visually?

Over thousands of years, we shared knowledge and information through hieroglyphs, pictograms, and cuneiform, all the way to the modern alphabet, books, the printing press, Morse code, the telegraph, the telephone, fax machines, and, finally, the computer.

With every invention, we increased the speed at which we share information. From a single hand-written papyrus traveling by foot for weeks, to printed books shipped by sea for weeks, today we can send the entire works of Shakespeare from LA to Japan in less than a second.

I’m sharing all this because I think it’s worth defining what a computer does in the context of information. With the computer, we can break down any thought, idea, or work of art into a pattern of electrical signals, store it, and send it across the globe at the speed of light.

A unique invention stands at the heart of this pattern—the fundamental particle of information—the bit. Technically, a bit is an electrical signal stored in a tiny piece of technology called a transistor. Each transistor holds a single bit, and 8 bits make 1 byte; Apple’s M1 Max chip packs 57 billion transistors.

This electrical signal is stored at a higher or lower voltage, which results in a binary sequence, 0 for lower voltage and 1 for higher voltage. With this two-state bit, we can represent just about anything. Take the simplest of text messages:

Hello is made of 5 individual characters. By convention—per the American Standard Code for Information Interchange (ASCII)—each letter maps to a decimal number such that:
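As a quick sketch in Javascript, the built-in `charCodeAt` method exposes exactly this mapping:

```javascript
// Map each character of "Hello" to its ASCII decimal code
const codes = "Hello".split("").map((ch) => ch.charCodeAt(0));
console.log(codes); // [72, 101, 108, 108, 111]
```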

Whereas a letter standing for a sound is ambiguous to represent, a number is more concrete. A number is a unit, and units can be counted and expressed in bits.

Take a single bit. If the current coming in is low, we can equate that to a binary 0. Increase the electrical charge so that the transistor can detect it and you get yourself a binary 1. With a single bit, we can represent the decimal numbers 0 and 1. What if we stick two bits together? With two consecutive bits, we double the number of values we can represent: 00, 01, 10, and 11 map to the decimal numbers 0 through 3.

Now, let’s scale things up to 8 bits.

Well, 128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 = 255. With 1 byte (8 bits) we can now represent 256 different values, from 0 up to 255. We can undoubtedly count from 0 to 72 (back to the H in Hello) with 8 bits.

Scale that up to the entire word, and you have the bit pattern of zeros and ones for Hello. Thus we can send the text Hello from LA to Japan in electrical form at the speed of light—186,000 miles per second.
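A minimal Javascript sketch of that pattern: convert each character's ASCII code to binary and pad it to a full byte.

```javascript
// Each ASCII code, padded to 8 bits, gives the bit pattern for "Hello"
const bits = "Hello"
  .split("")
  .map((ch) => ch.charCodeAt(0).toString(2).padStart(8, "0"))
  .join(" ");

console.log(bits); // 01001000 01100101 01101100 01101100 01101111
```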

Now, why have we gone to such lengths with the mechanics of a single word? Because this is the essence of the computer, a vast ocean of bits that can flip between 0 and 1 billions of times per second.

A text message, a photo, a book, a song, everything in a computer is built out of patterns of bits, of electricity. Images on screen are constellations of pixels of color, each a certain amount of red, green, and blue, each ranging from 0 to 255, which are patterns of 0s and 1s.

Everything in our digital life arises from long and fast-changing patterns of electricity, and the bit is such a fundamental unit that some believe it stands at the root of our ability to understand the universe.

John Archibald Wheeler, the last surviving collaborator of both Einstein and Bohr, put this manifesto in oracular monosyllables: “It from Bit”. Information gives rise to “every it—every particle, every field of force, even the spacetime continuum itself.”
This is another way of fathoming the paradox of the observer: that the outcome of an experiment is affected, or even determined, when it is observed. Not only is the observer observing, she is asking questions and making statements that must ultimately be expressed in discrete bits.
“What we call reality,” Wheeler wrote coyly, “arises in the last analysis from the posing of yes-no questions.” He added: “All things physical are information-theoretic in origin, and this is a participatory universe.” The whole universe is thus seen as a computer—a cosmic information-processing machine.

— The Information, James Gleick

Everything is data. From the collection of characters in a name, to the digits in a phone number, all the way to images, videos, websites, and video calls on Zoom.

The ability to express data is, therefore, the first big idea in a programming language. A primitive expression, as we call it, can be as simple as a number, a word, a decimal like pi, or even something as elementary as a truth or a falsehood. What makes an expression is that it evaluates—or adds up—to some final thing the computer can store in memory. Primitive expressions, such as these below, simply evaluate to themselves. That's what makes them primitive.
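A few primitive expressions in Javascript; typed into a console, each simply evaluates to itself:

```javascript
// Primitive expressions evaluate to themselves
42;        // a number
"hello";   // a word (a string)
3.14159;   // a decimal, like pi
true;      // a truth value (a boolean)
```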

But what do we mean by evaluate? Remember, a computer records data by imprinting electrical signals onto a physical piece of memory. Take, for example, the number 33 that we express using the decimal number system.

To store it, the computer will make electrical marks in memory slots. In other words, the computer has to evaluate the number to its binary equivalent, 00100001.
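A sketch of that evaluation in Javascript, using `toString` with a radix of 2 to see the binary form:

```javascript
// The decimal number 33 evaluates to the 8-bit pattern 00100001
const binary = (33).toString(2).padStart(8, "0");
console.log(binary);              // "00100001"
console.log(parseInt(binary, 2)); // back to 33
```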

The same applies to something a little more sophisticated, like a word. The name Frank, for example, is a collection of letters such as F, r, and a, which are expressed at a lower level by the decimal numbers 70, 114, and 97, which in turn boil down to 001... you get the point. We express what the computer can evaluate.

Expressions can also be combined, this is the second big idea. Take the simple additions below. We use the plus symbol or the add word, its equivalent in function notation, to combine two primitive expressions together. The numbers 3 and 2 no longer evaluate to themselves individually but to 5, their sum. Not only have we combined two primitive expressions, but we also gave the computer our first instruction: add two numbers together and evaluate the result.
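A sketch of both notations in Javascript; the `add` function here is a hypothetical helper of mine, not a built-in:

```javascript
// Infix notation: the + symbol combines two primitive expressions
3 + 2; // evaluates to 5

// Function notation: a hypothetical add() doing the same job
function add(a, b) {
  return a + b;
}
add(3, 2); // also evaluates to 5
```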

It all seems simple and straightforward, right? Well, not so fast. There's something here that we're taking for granted. Adding two numbers together with the + symbol might seem trivial to us humans, but when you consider that a computer is just electricity, how is it that it knows what to do with the symbol + or the word add?

Somewhere in your computer, a programmer left a series of instructions on how to add numbers together in binary and then return the result in decimal notation. These instructions are packaged in a "little container" named +. This way, we get to type + without having to think about how to add two numbers together using nothing but electricity.

This is the third and most powerful idea in a programming language: the ability to abstract; to make large collections of instructions available simply by their name. Just like when we drive a car there are mechanisms integral to our driving that have been abstracted away in the form of icons on our dashboard or pedals under our feet, so it is with computers and code.

Previously complex collections of instructions are made primitive to us in a "higher level" programming language. We get to stand on the shoulders (and code) of others and use their building blocks to make our own. We can, for example, combine a series of numbers and operators to recreate the process of converting temperatures from Celsius to Fahrenheit. We can do so using infix notation:
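A sketch of the infix version in Javascript, assuming the standard conversion formula F = C × 9/5 + 32:

```javascript
// Convert 100 degrees Celsius to Fahrenheit with infix notation
const celsius = 100;
celsius * 9 / 5 + 32; // evaluates to 212
```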

Or using function notation—slightly more complicated, at first, but just the same as the notation in the slide above:
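A sketch of what that function-notation version might look like; the `add`, `multiply`, and `divide` helpers are assumptions of mine, standing in for the operators on the slide:

```javascript
// Hypothetical helpers standing in for the function-notation operators
const add = (a, b) => a + b;
const multiply = (a, b) => a * b;
const divide = (a, b) => a / b;

// The same conversion, written as nested function calls
add(divide(multiply(100, 9), 5), 32); // evaluates to 212
```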

Finally, just like the programmers who created the instructions in the + symbol, we now get to abstract our instructions and give them a name of our own:
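For instance, a sketch of that abstraction in Javascript:

```javascript
// Abstract the conversion instructions behind a name of our own
function celsiusToFahrenheit(celsius) {
  return celsius * 9 / 5 + 32;
}

celsiusToFahrenheit(100); // 212
```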

Only to arrive at where we started.

In short, a programming language is a series of symbols, keywords, and "glyphs", each standing for instructions at a lower level, which in turn stand for ever deeper collections of instructions until we're left with nothing but literal electricity.

In the beginning, most of us learn to code by writing a simple program that greets the world. Some do it in an older programming language like Lisp or C, others in a more modern language like Python, Javascript, or Swift. Much is said as to which is the best language to learn, but they’re not actually all that different.

Programming isn't really about the language you do it in.

Computers—people who spent their days literally counting for the fields of mathematics, engineering, and navigation—first emerged in the 1600s as a job title. As calculations grew larger, machines were invented to do the job faster and with fewer mistakes.

A notable example of this was the tabulating machine, invented in response to the US Government's need for a more efficient way to count the Census of 1890. Counting the Census was estimated to take more than a decade to complete, but with this new technology, it was done in just over two years; this was huge at the time.

Eventually, the modern computer was born. A more general-purpose device that a programmer would use to input a set of values and instructions, to output a result. With time, these modern computers grew in sophistication. We invented ways to represent more facets of our daily lives—the alphabet, words, floating-point numbers (like 3.45), images, videos, all the way to modern day 3D games, and voice interfaces like Siri—all with a simple zero and a one.

One after another, these inventions led to the computers we use today, and even though our phones and laptops are a lot more sophisticated than their predecessors, their essence remains the same: Input data, process it, output the result.

Type in an email, hit send, output an email in the recipient's inbox. Type in an essay, hit export, output a PDF document. Upload a photo, hit post, output an Instagram post.

Programming is about the process that takes place between input and output. Just like we use common language to express thought, and mathematics to deal with quantities and measurements, we use code to describe how to do something.

Do programming languages play a role? Absolutely. Different languages have been created over the years that focus on different aspects of a computer, but the essential building blocks of programming exist in all of them.

Programming is the art of taking a complex challenge, thinking creatively about how to break it down, and ultimately building back up a process that solves it. What we can do with this creative thinking extends as far and wide as the frontiers of our knowledge and imagination.

Programming typically begins with a programming language, but let's first consider what a computer is.

Take, for instance, a table—what is a table? It's where you share a conversation, work on a project, eat a meal, write a story, keep books you're reading, leave keys when you come home, stack letters you haven't yet opened.

But a table isn't what we do on it. A table is a multi-layered object made of wood; that wood, an intricate pattern of fibers; those fibers an intricate structure of molecules, atoms, and eventually, pure energy. Your table is all these things.

Your computer is just like the table. The folders and files on your desktop are like the binders and papers on your desk, the books on your Kindle are like the books on your table, that word document you have open is the digital version of your notebook.

The objects on your computer seem so real that the computer, itself, has become invisible. Instead, you see text, images, favorites, todos, emails, work assignments, websites and the people you interact with.

Learning to code is to strip away the objects and see the computer for what it really is. Like the table, the computer is made of many layers. An image isn't quite like an analog photo, a movie isn't quite like the movies of the old days. These digital objects are collections of numbers that your computer turns into the visual experience you're familiar with on screen. From the words in a word document, to a movie on YouTube, or a conversation on FaceTime, everything is numbers; everything in your computer is data.

Take a movie, for example. A movie is made of moving images, and an image is made of squares of color, but what makes up a color—say, orange? Yellow and red (obviously), but a computer doesn't process color like we do. On a screen, all colors are made of varying degrees of red, green, and blue, each represented by a numerical value between 0 and 255. What we perceive as orange is 243 red, 83 green, 45 blue. Therefore, what for us is a movie, for a computer is billions of numbers.

Take another example, your desktop. In the same way that numbers represent a movie, your desktop stands for a series of internal processes and programs. Your cursor and your folder (seemingly separate objects) aren't in actuality separate at all.

It doesn't end there. Code might seem like the end of the line, but it too functions as a human-friendly representation of lower level machine instructions; all of which can eventually be reduced to ones and zeros, the basic expression of an electrical signal.

Nothing is quite what it seems.

I recently stumbled upon an old project I forgot I had built. As a personal exercise, I coded a basic replica of Apple's iPhone calculator, in Javascript, from scratch. I'm posting it here for my own reference: the live calculator and the GitHub repository.

Functional programming is an idea, a way of approaching programming, that borrows from mathematics and its idea of what a function is.

In computer science, a function can be defined as a bundle of code that does something—it mutates a data collection, it updates a database, it logs things onto the console, etc. If we want, we can even make it do many of these things at once. A function, in computer science, is a set of procedures that is given a name and can be passed around and invoked when needed.

In mathematics, a function has a stricter definition: a function is a mapping between an input and an output. It does one thing, and one thing only, and no matter what you give it, it always produces the same result. In addition to this mapping, the function never mutates the input. It produces the output based on what we pass it.

What functional programming is—at a high level—is the use of the mathematical definition of a function in computer programming. In functional programming, we reduce a problem to small single-purpose functions that we can assemble together like LEGO blocks. This can be boiled down to three core principles: 1) A function will always only look at the input; 2) A function will always produce an output; 3) All data structures are immutable.
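A minimal sketch of these principles in Javascript:

```javascript
// A pure function: it looks only at its input, always produces an
// output, and never mutates the data it is given
const double = (numbers) => numbers.map((n) => n * 2);

const original = [1, 2, 3];
const doubled = double(original);

console.log(doubled);  // [2, 4, 6]
console.log(original); // [1, 2, 3], untouched
```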

The beauty here is that given, say, a collection of numbers, we can run it through a very complex set of functions and still be sure that our data remains exactly the same in the end.

The function only mutates values inside its scope, but anything coming from the outside remains the same.

In functional programming, there’s an emphasis on clarity, both syntactical and of purpose. Each block has one purpose and nothing else. We don’t need to understand the function in order to use it. We call it and, no matter how complex its procedures, it should always produce the same output.

The benefit is that each function can be made and tested in isolation since it does just one thing. And over time, the function can be optimized and made a lot better without it ever impacting the code where it is called. But, in a world of pure functions, there's still a need to bridge into the real and more messy world of side-effects. These are anything from logging to the console, writing to a file, updating a database, or any external process. The key here is to separate all code that produces side-effects from the pure logic of a program and isolate them.
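A sketch of that separation in Javascript; the function names are my own, for illustration:

```javascript
// Pure core: computes the greeting, touches nothing outside itself
const formatGreeting = (name) => `Hello, ${name}!`;

// Impure shell: the side effect (logging) is isolated here
const printGreeting = (name) => console.log(formatGreeting(name));

printGreeting("Ada"); // logs "Hello, Ada!"
```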

Lastly, with functional programming, there is an incessant creation of copies of the same data, given that functions do not modify their input. This problem has been solved by persistent data structures.

My learnings for this post came from here and here.

All data in a computer is stored through a binary electrical system – binary as in bi, two. The bit, the computer’s unit of data, is expressed through an electrical signal or the lack thereof. This signal is managed by a transistor, a tiny switch activated by the electrical signals it receives. If the transistor is activated, it conducts electricity, creating an electrical signature in the computer's memory equivalent to a 1, or true. Otherwise, the lack of signal is equivalent to a 0, or false.

The basis of this binary system, as we have it today, was first introduced by Leibniz in 1689, as part of an attempt to develop a system to convert verbal logic into the smallest form of pure mathematics. It is said Leibniz was actually influenced by the I Ching 🤯 and was attempting to combine his philosophical and religious beliefs with the field of mathematics. Together with George Boole’s work in logic and MIT’s Claude Shannon’s paper relating them to computing, this was the basis for the simple and yet incredibly ingenious system behind today’s digital computer.

There have been ternary and even quinary electrical systems developed in the field of computing. But the more complex the system, the harder it is to tell the difference between voltage levels, especially when the computer is low on battery or its electrical system suffers interference from another device (e.g., a microwave). So the world settled on binary, the simplest and most effective system. The voltage is either there or not.

That's how we get zeros and ones: electricity.

I have often resorted to Google to get a quick random number generating function. Being a little dyslexic, I'd get all confused with the max - min + 1 + min portion of the function. Well, today is the day I untangle the mins and maxes. In essence, the random function in Javascript’s Math object returns a quasi-random floating point number between zero (inclusive) and one (exclusive).
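For example:

```javascript
// Math.random() returns a quasi-random float in [0, 1):
// zero is a possible result, one never is
const r = Math.random();
console.log(r >= 0 && r < 1); // true
```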

If I were looking for a number between 0 and 9, I could simply shift the decimal point by 1 place by multiplying the result by 10.

To make the 10 inclusive, I could increase the multiplier by 1; that is, multiply by 10 + 1. This would increase the range of possible random numbers from 0-9 to 0-10.

What this means is that I'm multiplying the result of the random function by the range of possible numbers I'm looking for, adding one to the range so as to make the upper bound inclusive.

To get a random number between 0 and 75, I can:
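A sketch of that in Javascript:

```javascript
// One random integer between 0 and 75, inclusive: multiply by the
// number of possibilities (75 + 1) and round down
const roll = Math.floor(Math.random() * (75 + 1));
console.log(roll); // an integer somewhere between 0 and 75
```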

What if I want a number between a minimum and a maximum, say between 25 and 150? There are two parts to the process. First, I need to determine the range of numbers I want my number to fall within — that is, the range between 25 and 150. That can be achieved by subtracting 25 from 150, which gives 125. I'm therefore looking for one random number out of 126 possible numbers (once the maximum is made inclusive).

Then, I want my possible random number to fall between 25 and 150. To shift a random number that starts at 0 so that it starts at least at 25, all I have to do is add 25 to it. 🤯

In essence, this is a random number multiplied by a range of numbers and bumped up by the starting point number.

Finally, the result can be rounded down to the lowest nearest integer; and that’s how you get a damn random number between 25 and 150. See function in Javascript below:
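A sketch of the full function in Javascript:

```javascript
// A random integer between min and max, both inclusive
function randomBetween(min, max) {
  return Math.floor(Math.random() * (max - min + 1)) + min;
}

randomBetween(25, 150); // an integer between 25 and 150
```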

I’ve had a somewhat liberating epiphany recently. The methods built into a programming language can be written from scratch using primitive building blocks like if-else statements and loops. Built-in methods exist to bundle complicated procedures behind one simple interface; but they're simply solutions to common problems so a programmer doesn’t have to write them over and over again. Programming is problem solving, whether I use complex or simple tools.

It's the same in design. There are many nuts and bolts to every tool. Sketch and Figma are filled with smart details meant to make a designer’s life easier. But I also know, by virtue of my experience, that all I need is a blank canvas, the rectangle tool, type, and color. Tools are helpful, but the work happens in thinking about and experimenting on a problem enough that eventually a solution starts to emerge—regardless of the tool used.

To concretize this, I wrote my own version of Javascript's splice() method. I’m sure my algorithm could be made better, cleaner, faster, and more efficient. But what a fun experience to realize, in practice, that a method like splice is really just a beautiful function, like my own functions.

Splice is a robust method. With one single line of code, I can shorten an array, remove items at specific index positions, or even insert multiple new items at a location. It works in place and therefore on the array itself.
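A quick demonstration of the built-in splice in action:

```javascript
const letters = ["a", "b", "c", "d", "e"];

// Remove 2 items starting at index 1, inserting "x" in their place
const removed = letters.splice(1, 2, "x");

console.log(removed); // ["b", "c"]
console.log(letters); // ["a", "x", "d", "e"], modified in place
```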

In my own version of splice, I built a couple of dedicated methods to perform each major procedure: shortening an array, deleting an item (or items) at a particular location, and sequentially inserting as many elements as are passed into the function.

A method to shorten the array:
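Something along these lines (a sketch; my original helper names may have differed):

```javascript
// Shorten an array in place to a given length
function shorten(array, newLength) {
  while (array.length > newLength) {
    array.pop();
  }
  return array;
}

shorten([1, 2, 3, 4, 5], 3); // [1, 2, 3]
```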

Methods to delete an item(s):
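A sketch of the deletion helper, shifting the tail of the array left over the removed slots:

```javascript
// Delete `count` items starting at index `start`, in place,
// returning the removed items
function deleteAt(array, start, count) {
  const removed = [];
  for (let i = 0; i < count && start + i < array.length; i++) {
    removed.push(array[start + i]);
  }
  // Shift the tail left over the removed slots
  for (let i = start; i + removed.length < array.length; i++) {
    array[i] = array[i + removed.length];
  }
  array.length -= removed.length;
  return removed;
}

deleteAt(["a", "b", "c", "d"], 1, 2); // returns ["b", "c"]
```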

A method to insert an item(s):
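And a sketch of the insertion helper, making room by shifting existing elements right:

```javascript
// Insert items at index `start`, in place, shifting the rest right
function insertAt(array, start, ...items) {
  // Make room by moving existing elements right by items.length
  for (let i = array.length - 1; i >= start; i--) {
    array[i + items.length] = array[i];
  }
  // Drop each new item into the gap, sequentially
  for (let i = 0; i < items.length; i++) {
    array[start + i] = items[i];
  }
  return array;
}

insertAt(["a", "d"], 1, "b", "c"); // ["a", "b", "c", "d"]
```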

Finally, they all came together as a single splice method with a nice O(n) asymptotic complexity. Like in Javascript’s original splice, my splice method takes in as many arguments as needed, and based on that updates its behavior internally with no outside input.

All in all, lots to learn – but that was fun.

One of the proverbial things to understand in programming is the mouthful of why, when we let a = 1, and we let b = a, and then we change a to 5, b is still 1. In an attempt to clarify this, I created a one-sheet visualization of the matter at hand.

In essence, primitive values (i.e. things like strings, numbers, and booleans) are stored by storing the value. This means that the actual value is stored inside the variable. So when I tell the computer to store a in b, I’m not storing a link from b to a, but a copy of the value originally stored in a.

More complex values (i.e. things like arrays or objects, or in lay terms, collections of primitive values) are stored by storing the reference to the value. This means that what gets stored in the variable is a reference to the location in memory where the data is stored.
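In code, the two behaviors look like this:

```javascript
// Primitive values are copied
let a = 1;
let b = a; // b stores its own copy of the value 1
a = 5;
console.log(b); // 1

// Complex values are stored by reference
let list1 = [1, 2, 3];
let list2 = list1; // list2 points to the same array in memory
list1.push(4);
console.log(list2); // [1, 2, 3, 4]
```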

Javascript is now the fourth programming language I’m coming into contact with. I started by learning how computers work in C; then I learned to program in Python and Swift. Javascript is the language I’m now learning new technologies with.

One of the cool things I’m experiencing as I settle into a new language is how much easier it is to get started each time. Even though each language has its own purpose and syntactic idiosyncrasies, they all share the same principles. Recursion is recursion is recursion, no matter the language.

So, this time I took a non-linear approach to learning the nooks and crannies of Javascript and compiled the main things I wanted to retain in a single one-sheet.