
Everything is data. From the collection of characters in a name, to the digits in a phone number, all the way to images, videos, websites, and video calls on Zoom.

The ability to express data is, therefore, the first big idea in a programming language. A primitive expression, as we call it, can be as simple as a number, a word, a decimal like pi, or even something as elementary as a truth or a falsehood. What makes something an expression is that it evaluates—or adds up—to some final thing the computer can store in memory. Primitive expressions, such as these below, simply evaluate to themselves. That's what makes them primitive.
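Here's what that looks like in JavaScript, for example:

```js
// Typed into a JavaScript console, each primitive expression
// simply evaluates to itself.
33;        // 33
"Frank";   // "Frank"
3.14159;   // 3.14159 (a decimal, like pi)
true;      // true (a truth)
false;     // false (a falsehood)
```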

But what do we mean by evaluate? Remember, a computer records data by imprinting electrical signals onto a physical piece of memory. Take, for example, the number 33 that we express using the decimal number system.

To store it, the computer will make electrical marks in memory slots. In other words, the computer has to evaluate the number to its binary equivalent, 00100001.

The same applies to something a little more sophisticated, like a word. The name Frank, for example, is a collection of letters such as F, r, and a, which are expressed at a lower level by the numbers 70, 114, and 97 (their ASCII codes), which in turn boil down to 01000110... you get the point. We express what the computer can evaluate.
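In JavaScript, for instance, you can peek at these lower-level encodings:

```js
// The decimal number 33 in binary notation.
(33).toString(2);     // "100001"

// The letter F as its ASCII code, and that code in binary.
"F".charCodeAt(0);    // 70
(70).toString(2);     // "1000110"
```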

Expressions can also be combined; this is the second big idea. Take the simple additions below. We use the plus symbol, or the add word, its equivalent in function notation, to combine two primitive expressions. The numbers 3 and 2 no longer evaluate to themselves individually but to 5, their sum. Not only have we combined two primitive expressions, but we have also given the computer our first instruction: add two numbers together and evaluate the result.
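Sketched in JavaScript, with add as an illustrative stand-in for the function notation:

```js
// Infix notation: the + symbol combines two primitive expressions.
3 + 2;     // evaluates to 5

// Function notation: the same combination, assuming an add
// function has been defined to bundle the instructions.
function add(x, y) {
  return x + y;
}
add(3, 2); // also evaluates to 5
```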

It all seems simple and straightforward, right? Well, not so fast. There's something here that we're taking for granted. Adding two numbers together with the + symbol might seem trivial to us humans, but when you consider that a computer is just electricity, how is it that it knows what to do with the symbol + or the word add?

Somewhere in your computer, a programmer left a series of instructions on how to add numbers together in binary and then return the result in decimal notation. These instructions are packaged in a "little container" with the name +. That way, we get to type + without having to think about how to add two numbers together using nothing but electricity.

This is the third and most powerful idea in a programming language: the ability to abstract, that is, to make large collections of instructions available simply by their name. Just like when we drive a car there are mechanisms integral to our driving that have been abstracted away in the form of icons on our dashboard or pedals under our feet, so it is with computers and code.

Previously complex collections of instructions are made primitive to us in a "higher level" programming language. We get to stand on the shoulders (and code) of others and use their building blocks to make our own. We can, for example, combine a series of numbers and operators to recreate the process of converting temperatures from Celsius to Fahrenheit. We can do so using infix notation:
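```js
// A sketch of the conversion in infix notation:
// multiply by 9, divide by 5, add 32.
20 * (9 / 5) + 32;   // evaluates to 68
```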

Or using function notation—slightly more complicated at first, but just the same as the infix notation above:
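```js
// The same conversion in function notation, assuming multiply and
// divide are defined just like add above (illustrative stand-ins).
add(divide(multiply(20, 9), 5), 32);   // also evaluates to 68
```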

Finally, just like the programmers who created the instructions behind the + symbol, we now get to abstract our instructions and give them a name of our own:
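```js
// A sketch: bundling the conversion behind a name of our own.
function celsiusToFahrenheit(celsius) {
  return celsius * (9 / 5) + 32;
}

celsiusToFahrenheit(20);   // 68
```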

Only to arrive at where we started.

In short, a programming language is a series of symbols, keywords, and "glyphs", each standing for instructions at a lower level, which in turn stand for ever deeper collections of instructions until we're left with nothing but literal electricity.

In the beginning, most of us learn to code by writing a simple program that greets the world. Some do it in an older programming language like Lisp or C, others in a more modern language like Python, JavaScript, or Swift. Much is said as to which is the best language to learn, but they're not actually all that different.

Programming isn't really about the language you do it in.

Computers—people who spent their days literally counting for the fields of mathematics, engineering, and navigation—emerged first in the 1600s as a job title. As calculations grew larger, machines were invented to do the job faster and with fewer mistakes.

A notable example of this was the tabulating machine, invented in response to the US Government's need for a more efficient way to count the Census of 1890. Counting the Census was estimated to take more than a decade to complete, but with this new technology, it was completed in just over two years; this was huge at the time.

Eventually, the modern computer was born: a more general-purpose device into which a programmer could input a set of values and instructions and get back a result. With time, these modern computers grew in sophistication. We invented ways to represent more facets of our daily lives—the alphabet, words, floating-point numbers (like 3.45), images, videos, all the way to modern-day 3D games and voice interfaces like Siri—all with a simple zero and a one.

One after another, these inventions led to the computers we use today, and even though our phones and laptops are a lot more sophisticated than their predecessors, their essence remains the same: Input data, process it, output the result.

Type in an email, hit send, output an email in the recipient's inbox. Type in an essay, hit export, output a PDF document. Upload a photo, hit post, output an Instagram post.

Programming is about the process that takes place between input and output. Just like we use common language to express thought, and mathematics to deal with quantities and measurements, we use code to describe how to do something.

Do programming languages play a role? Absolutely. Different languages have been created over the years that focus on different aspects of a computer, but the essential building blocks of programming exist in all of them.

Programming is the art of taking a complex challenge, thinking creatively about how to break it down, and ultimately building up a process that solves it. What we can do with this creative thinking extends as far and wide as the frontiers of our knowledge and imagination.

Programming typically begins with a programming language, but let's first consider what a computer is.

Take, for instance, a table—what is a table? It's where you share a conversation, work on a project, eat a meal, write a story, keep books you're reading, leave keys when you come home, stack letters you haven't yet opened.

But a table isn't what we do on it. A table is a multi-layered object made of wood; that wood, an intricate pattern of fibers; those fibers, an intricate structure of molecules, atoms, and eventually, pure energy. Your table is all these things.

Your computer is just like the table. The folders and files on your desktop are like the binders and papers on your desk, the books on your Kindle are like the books on your table, and that Word document you have open is the digital version of your notebook.

The objects on your computer seem so real that the computer itself has become invisible. Instead, you see text, images, favorites, todos, emails, work assignments, websites, and the people you interact with.

Learning to code means stripping away the objects and seeing the computer for what it really is. Like the table, the computer is made of many layers. An image isn't quite like an analog photo, and a movie isn't quite like the movies of the old days. These digital objects are collections of numbers that your computer turns into the visual experience you're familiar with on screen. From the words in a Word document, to a movie on YouTube, to a conversation on FaceTime, everything is numbers; everything in your computer is data.

Take a movie, for example. A movie is made of moving images, and an image is made of squares of color, but what makes up a color—say, orange? Yellow and red (obviously), but a computer doesn't process color like we do. On a screen, all colors are made of varying degrees of red, green, and blue, each represented by a numerical value between 0 and 255. What we perceive as orange is 243 red, 83 green, 45 blue. Therefore, what for us is a movie, for a computer is billions of numbers.
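In code, a pixel of that orange is nothing but three numbers (a simplified sketch; real image formats are more involved):

```js
// One pixel of orange as red, green, and blue intensities (0 to 255).
const orange = [243, 83, 45];

// A tiny 2x2 "image" is just pixels, and pixels are just numbers.
const image = [
  [orange, orange],
  [orange, orange],
];
```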

Take another example: your desktop. In the same way that numbers represent a movie, your desktop stands for a series of internal processes and programs. Your cursor and your folder, seemingly separate objects, aren't in actuality separate at all.

It doesn't end there. Code might seem like the end of the line, but it too functions as a human-friendly representation of lower-level machine instructions, all of which can eventually be reduced to ones and zeros, the basic expression of an electrical signal.

Nothing is quite what it seems.

Functional programming is an idea, a way of approaching programming, that borrows from mathematics and its idea of what a function is.

In computer science, a function can be defined as a bundle of code that does something—it mutates a data collection, it updates a database, it logs things onto the console, etc. If we want, we can even make it do many of these things at once. A function, in computer science, is a set of procedures that is given a name and can be passed around and invoked when needed.

In mathematics, a function has a stricter definition: a function is a mapping between an input and an output. It does one thing, and one thing only, and no matter what you give it, it always produces the same result. In addition to this mapping, the function never mutates the input. It produces the output based on what we pass it.

What functional programming is—at a high level—is the use of the mathematical definition of a function in computer programming. In functional programming, we reduce a problem to small single-purpose functions that we can assemble together like LEGO blocks. This can be boiled down to three core principles: 1) A function will always only look at the input; 2) A function will always produce an output; 3) All data structures are immutable.

The beauty here is that given, say, a collection of numbers, we can run it through a very complex set of functions and still be sure that our data remains exactly the same in the end.
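A small sketch of that guarantee in JavaScript:

```js
const numbers = [1, 2, 3, 4, 5];

// Each step reads its input and produces a brand-new output;
// nothing is mutated along the way.
const doubled = numbers.map((n) => n * 2);          // [2, 4, 6, 8, 10]
const bigOnes = doubled.filter((n) => n > 4);       // [6, 8, 10]
const total   = bigOnes.reduce((a, b) => a + b, 0); // 24

console.log(numbers); // still [1, 2, 3, 4, 5]
```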

The function only mutates values inside its scope, but anything coming from the outside remains the same.

In functional programming, there’s an emphasis on clarity, both syntactical and of purpose. Each block has one purpose and nothing else. We don’t need to understand the function in order to use it. We call it and, no matter how complex its procedures, it should always produce the same output.

The benefit is that each function can be made and tested in isolation, since it does just one thing. And over time, the function can be optimized and made a lot better without ever impacting the code where it is called. But in a world of pure functions, there's still a need to bridge into the real and messier world of side effects. These are anything from logging to the console, to writing to a file, updating a database, or any other external process. The key is to separate all code that produces side effects from the pure logic of a program and isolate it.
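For example (the names here are illustrative):

```js
// Pure core: same input, same output, nothing mutated.
const applyDiscount = (order, rate) => ({
  ...order,
  total: order.total * (1 - rate),
});

// Impure shell: the side effect (logging) is kept at the edge.
function processOrder(order) {
  const discounted = applyDiscount(order, 0.1);
  console.log(`New total: ${discounted.total}`); // side effect
  return discounted;
}

processOrder({ id: 1, total: 100 }); // logs "New total: 90"
```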

Lastly, with functional programming, there is an incessant creation of copies of the same data, given that functions do not modify their input. This problem has been solved by persistent data structures.

My learnings for this post came from here and here.

All data in a computer is stored through a binary electrical system – binary as in bi, two. The bit, the computer's unit of data, is expressed through an electrical signal or the lack thereof. This signal is managed by a transistor, a tiny switch that can be activated by the electrical signals it receives. If the transistor is activated, it conducts electricity, creating an electrical signature in the computer's memory equivalent to a 1, or a truth. Otherwise, the lack of signal is equivalent to a 0, or a falsehood.

The basis of this binary system, as we have it today, was first introduced by Leibniz in 1689, as part of an attempt to develop a system to convert verbal logic into the smallest form of pure mathematics. It is said Leibniz was actually influenced by the I Ching 🤯 and was attempting to combine his philosophical and religious beliefs with the field of mathematics. Together with George Boole's work in logic and Claude Shannon's MIT paper relating it to computing, this was the basis for the simple and yet incredibly ingenious system behind today's digital computer.

There have been ternary and even quinary electrical systems developed in the field of computing. But the more complex the system, the harder it is to tell the difference between different voltage levels, especially when the computer is low on battery or its electrical system is interfered with by another device (e.g., a microwave). So the world settled on binary, the simplest and most effective system. The voltage is either there or not.

That's how we get zeros and ones: electricity.

I’ve had a somewhat liberating epiphany recently. The methods built into a programming language can be written from scratch using primitive building blocks like if-else statements and loops. Built-in methods exist to bundle complicated procedures behind one simple interface; but they're simply solutions to common problems so a programmer doesn’t have to write them over and over again. Programming is problem solving, whether I use complex or simple tools.

It's the same in design. There are many nuts and bolts to every tool. Sketch and Figma are filled with smart details meant to make a designer's life easier. But I also know, by virtue of my experience, that all I need is a blank canvas, the rectangle tool, type, and color. Tools are helpful, but the work happens in thinking about and experimenting on a problem enough that eventually a solution starts to emerge—regardless of the tool used.

To concretize this, I wrote my own version of JavaScript's splice() method. I'm sure my algorithm could be made better, cleaner, faster, and more efficient. But what a fun experience to realize, in practice, that a method like splice is really just a beautiful function, like my own functions.

Splice is a robust method. With a single line of code, I can shorten an array, remove items at specific index positions, or even insert multiple new items at a location. It works in place, that is, on the array itself.
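A few examples of the built-in method at work:

```js
const letters = ["a", "b", "c", "d", "e"];

// Remove two items starting at index 1; splice returns what it removed.
const removed = letters.splice(1, 2);
console.log(removed); // ["b", "c"]
console.log(letters); // ["a", "d", "e"] (the array itself changed)

// Insert new items at index 1 without deleting anything.
letters.splice(1, 0, "x", "y");
console.log(letters); // ["a", "x", "y", "d", "e"]
```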

In my own version of splice, I built a few dedicated methods to perform each major procedure: shortening an array, deleting an item (or several) at a particular location, and inserting any number of new elements sequentially into the array.

A method to shorten the array:
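```js
// A sketch of the idea (my actual code differed in the details):
// setting an array's length truncates it in place.
function shorten(arr, newLength) {
  if (newLength < arr.length) {
    arr.length = newLength;
  }
  return arr;
}
```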

Methods to delete an item (or several):
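```js
// A sketch of deletion in place: copy out the removed items,
// shift the rest left, then truncate (details illustrative).
function deleteAt(arr, start, deleteCount) {
  const removed = [];
  for (let i = start; i < Math.min(start + deleteCount, arr.length); i++) {
    removed.push(arr[i]);
  }
  const count = removed.length;
  for (let i = start; i + count < arr.length; i++) {
    arr[i] = arr[i + count];
  }
  arr.length -= count;
  return removed;
}
```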

A method to insert an item (or several):
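```js
// A sketch of insertion in place: shift existing items right to
// make room, then drop the new items into the gap (details illustrative).
function insertAt(arr, start, items) {
  const count = items.length;
  for (let i = arr.length - 1; i >= start; i--) {
    arr[i + count] = arr[i];
  }
  for (let i = 0; i < count; i++) {
    arr[start + i] = items[i];
  }
  return arr;
}
```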

Finally, they all came together as a single splice method with a nice O(n) asymptotic complexity. Like JavaScript's original splice, my splice method takes in as many arguments as needed and adjusts its behavior internally based on what it receives, with no outside input.
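The overall shape was something like this (a sketch, not my exact code):

```js
// Compose the helpers above into a splice-like method.
function mySplice(arr, start, deleteCount, ...items) {
  // Like the built-in splice: with no delete count given,
  // remove everything from start onward.
  if (deleteCount === undefined) {
    deleteCount = arr.length - start;
  }
  const removed = deleteAt(arr, start, deleteCount);
  insertAt(arr, start, items);
  return removed; // splice returns the removed items
}
```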

All in all, lot’s to learn – but that was fun.

One of the proverbial things to understand in programming is the mouthful of why, when we let a = 1, then let b = a, and then change a to 5, b is still 1. In an attempt to clarify this, I created a one-sheet visualization of the matter at hand.

In essence, primitive values (i.e. things like strings, numbers, and booleans) are stored by value. This means that the actual value is stored inside the variable. So when I tell the computer to store a in b, I'm not storing a link from b to a, but a copy of the value originally stored in a.

More complex values (i.e. things like arrays or objects, or in lay terms, collections of primitive values) are stored by reference. This means that what gets stored in the variable is a reference to the location in memory where the data itself lives.
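In code:

```js
// Primitives: the value itself is copied.
let a = 1;
let b = a;   // b gets its own copy of the value 1
a = 5;
console.log(b);       // 1 (b is unaffected)

// Arrays and objects: the reference is copied.
const first = [1, 2, 3];
const second = first; // second points at the same array in memory
first.push(4);
console.log(second);  // [1, 2, 3, 4] (same underlying data)
```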

JavaScript is now the fourth programming language I've come into contact with. I started by learning how computers work in C; I learned to program in Python and Swift. JavaScript is the language I'm now learning new technologies with.

One of the cool things I'm experiencing as I dig into a new language is how much easier it is to get started. Even though each language has its own purpose and syntactic idiosyncrasies, they all share the same principles. Recursion is recursion is recursion, no matter the language.

So, this time I took a non-linear approach to learning the nooks and crannies of JavaScript and compiled the main things I wanted to retain into a single one-sheet.