Everything is data. From the collection of characters in a name, to the digits in a phone number, and all the way to images, videos, websites, or video calls on Zoom.

The ability to express data is, therefore, the first big idea in a programming language. A primitive expression, as we call it, can be as simple as a number, a word, a decimal like pi, or even something as elementary as a truth or a falsehood. What makes an expression is that it evaluates—or adds up—to some final thing the computer can store in memory. Primitive expressions, such as these below, simply evaluate to themselves. That's what makes them primitive.
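
The original examples aren't reproduced here, but in JavaScript (the language used in the other posts on this site) such primitive expressions might look like this:

```js
42        // a number evaluates to itself
'Frank'   // a string of characters evaluates to itself
3.14159   // a decimal evaluates to itself
true      // so does a truth value
```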

But what do we mean by evaluate? Remember, a computer records data by imprinting electrical signals onto a physical piece of memory. Take, for example, the number 33 that we express using the decimal number system.

To store it, the computer will make electrical marks in memory slots. In other words, the computer has to evaluate the number to its binary equivalent, 00100001.
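
A quick way to see that evaluation, sketched in JavaScript:

```js
(33).toString(2);                   // "100001"
(33).toString(2).padStart(8, '0');  // "00100001", padded to a full byte
```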

The same applies to something a little more sophisticated like a word. The name Frank, for example, is a collection of letters such as F, r, and a, which are expressed at a lower level by the character codes 70, 114, and 97 (46, 72, and 61 in hexadecimal), which in turn boil down to 01000110... you get the point. We express what the computer can evaluate.
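
Again in JavaScript, a small illustration of letters boiling down to numbers, and those numbers to binary:

```js
'F'.charCodeAt(0);                  // 70 (0x46)
'r'.charCodeAt(0);                  // 114 (0x72)
'a'.charCodeAt(0);                  // 97 (0x61)
(70).toString(2).padStart(8, '0');  // "01000110"
```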

Expressions can also be combined; this is the second big idea. Take the simple additions below. We use the plus symbol or the add word, its equivalent in function notation, to combine two primitive expressions. The numbers 3 and 2 no longer evaluate to themselves individually but to 5, their sum. Not only have we combined two primitive expressions, but we have also given the computer our first instruction: add two numbers together and evaluate the result.
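
Sketched in JavaScript, with `add` defined here as a simple stand-in for the built-in + (the original snippet isn't shown):

```js
const add = (a, b) => a + b;  // a named bundle of instructions

3 + 2;      // infix notation, evaluates to 5
add(3, 2);  // function notation, also evaluates to 5
```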

It all seems simple and straightforward, right? Well, not so fast. There's something here that we're taking for granted. Adding two numbers together with the + symbol might seem trivial to us humans, but when you consider that a computer is just electricity, how is it that it knows what to do with the symbol + or the word add?

Somewhere in your computer, a programmer left a series of instructions on how to add numbers together in binary and then return the result in decimal notation. These instructions are packaged in a "little container" with the name +. This way, we get to type + without having to think about how to add two numbers together using nothing but electricity.

This is the third and most powerful idea in a programming language: the ability to abstract; to make large collections of instructions available simply by their name. Just like when we drive a car there are mechanisms integral to our driving that have been abstracted away in the form of icons on our dashboard or pedals under our feet, so it is with computers and code.

Previously complex collections of instructions are made primitive to us in a "higher level" programming language. We get to stand on the shoulders (and code) of others and use their building blocks to make our own. We can, for example, combine a series of numbers and operators to recreate the process of converting temperatures from Celsius to Fahrenheit. We can do so using infix notation:
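
The original slide isn't reproduced here; in JavaScript, the infix version might read:

```js
// 30 degrees Celsius expressed in Fahrenheit
30 * 9 / 5 + 32;  // 86
```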

Or using function notation—slightly more complicated at first, but just the same as the infix notation above:
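
A sketch of the same conversion in function notation, assuming small helpers like `multiply` and `divide` (and reusing the `add` stand-in from earlier):

```js
const multiply = (a, b) => a * b;
const divide = (a, b) => a / b;

add(divide(multiply(30, 9), 5), 32);  // 86
```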

Finally, just like the programmers who created the instructions in the + symbol, we now get to abstract our instructions and give them a name of our own:
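
For example, a sketch of bundling the conversion behind a name of our own (the name `celsiusToFahrenheit` is mine, not from the original slide):

```js
const celsiusToFahrenheit = (celsius) => celsius * 9 / 5 + 32;

celsiusToFahrenheit(30);  // 86
```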

Only to arrive at where we started.

In short, a programming language is a series of symbols, keywords, and "glyphs", each standing for instructions at a lower level, which in turn stand for ever deeper collections of instructions until we're left with nothing but literal electricity.

In the beginning, most of us learn to code by writing a simple program that greets the world. Some do it in an older programming language like Lisp or C, others in a more modern language like Python, Javascript, or Swift. Much is said as to which is the best language to learn, but they’re not actually all that different.

Programming isn't really about the language you do it in.

Computers—people who spent their days literally counting for the fields of mathematics, engineering, and navigation—first emerged in the 1600s as a job title. As calculations grew larger, machines were invented to do the job faster and with fewer mistakes.

A notable example of this was the tabulating machine, invented in response to the US Government's need for a more efficient way to count the Census of 1890. Counting the Census was estimated to take more than a decade to complete, but with this new technology, it was done in just over two years; this was huge at the time.

Eventually, the modern computer was born: a more general-purpose device into which a programmer would input a set of values and instructions to output a result. With time, these modern computers grew in sophistication. We invented ways to represent more facets of our daily lives—the alphabet, words, floating-point numbers (like 3.45), images, videos, all the way to modern day 3D games, and voice interfaces like Siri—all with a simple zero and a one.

One after another, these inventions led to the computers we use today, and even though our phones and laptops are a lot more sophisticated than their predecessors, their essence remains the same: Input data, process it, output the result.

Type in an email, hit send, output an email in the recipient's inbox. Type in an essay, hit export, output a PDF document. Upload a photo, hit post, output an Instagram post.

Programming is about the process that takes place between input and output. Just like we use common language to express thought, and mathematics to deal with quantities and measurements, we use code to describe how to do something.

Do programming languages play a role? Absolutely. Different languages have been created over the years that focus on different aspects of a computer, but the essential building blocks of programming exist in all of them.

Programming is the art of taking a complex challenge, thinking creatively about how to break it down, and ultimately building a process back up that solves it. What we can do with this creative thinking extends as far and wide as the frontiers of our knowledge and imagination.

Programming typically begins with a programming language, but let's first consider what a computer is.

Take, for instance, a table—what is a table? It's where you share a conversation, work on a project, eat a meal, write a story, keep books you're reading, leave keys when you come home, stack letters you haven't yet opened.

But a table isn't what we do on it. A table is a multi-layered object made of wood; that wood, an intricate pattern of fibers; those fibers an intricate structure of molecules, atoms, and eventually, pure energy. Your table is all these things.

Your computer is just like the table. The folders and files on your desktop are like the binders and papers on your desk, the books on your Kindle are like the books on your table, that word document you have open is the digital version of your notebook.

The objects on your computer seem so real that the computer, itself, has become invisible. Instead, you see text, images, favorites, todos, emails, work assignments, websites and the people you interact with.

Learning to code is to strip away the objects and see the computer for what it really is. Like the table, the computer is made of many layers. An image isn't quite like an analog photo, a movie isn't quite like the movies of the old days. These digital objects are collections of numbers that your computer turns into the visual experience you are familiar with on screen. From the words on a Word document, to a movie on YouTube, or a conversation on FaceTime, everything is numbers; everything in your computer is data.

Take a movie, for example. A movie is made of moving images, and an image is made of squares of color, but what makes up a color—say, orange? Yellow and red (obviously), but a computer doesn't process color like we do. On a screen, all colors are made of varying degrees of red, green, and blue, each represented by a numerical value between 0 and 255. What we perceive as orange is 243 red, 83 green, 45 blue. Therefore, what for us is a movie, for a computer is billions of numbers.
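
In code, that orange is nothing more than three numbers; for instance:

```js
// one orange pixel, as the computer sees it: [red, green, blue]
const orange = [243, 83, 45];
```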

Take another example: your desktop. In the same way that numbers represent a movie, your desktop stands for a series of internal processes and programs. Your cursor and your folder, seemingly separate objects, aren't in actuality separate at all.

It doesn't end there. Code might seem like the end of the line, but it too functions as a human-friendly representation of lower level machine instructions; all of which can eventually be reduced to ones and zeros, the basic expression of an electrical signal.

Nothing is quite what it seems.

I recently stumbled upon an old project I'd forgotten I built. As a personal exercise, I coded a basic replica of Apple's iPhone calculator, in Javascript, from scratch. I'm posting it here for my own reference: the live calculator and the GitHub repository.

Functional programming is an idea, a way of approaching programming, that borrows from mathematics and its idea of what a function is.

In computer science, a function can be defined as a bundle of code that does something—it mutates a data collection, it updates a database, it logs things onto the console, etc. If we want, we can even make it do many of these things at once. A function, in computer science, is a set of procedures that is given a name and can be passed around and invoked when needed.

In mathematics, a function has a stricter definition: a function is a mapping between an input and an output. It does one thing, and one thing only, and no matter what you give it, it always produces the same result. In addition to this mapping, the function never mutates the input. It produces the output based on what we pass it.

What functional programming is—at a high level—is the use of the mathematical definition of a function in computer programming. In functional programming, we reduce a problem to small single-purpose functions that we can assemble together like LEGO blocks. This can be boiled down to three core principles: 1) A function will always only look at the input; 2) A function will always produce an output; 3) All data structures are immutable.

The beauty here is that given, say, a collection of numbers, we can run it through a very complex set of functions and still be sure that our data remains exactly the same in the end.

The function only mutates values inside its scope, but anything coming from the outside remains the same.
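
A minimal sketch of that idea in JavaScript; the pure function reads its input and returns a brand-new array, leaving the original untouched:

```js
// pure: no mutation, no side effects
const double = (numbers) => numbers.map((n) => n * 2);

const input = [1, 2, 3];
const output = double(input);

console.log(output);  // [2, 4, 6]
console.log(input);   // [1, 2, 3], unchanged
```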

In functional programming, there’s an emphasis on clarity, both syntactical and of purpose. Each block has one purpose and nothing else. We don’t need to understand the function in order to use it. We call it and, no matter how complex its procedures, it should always produce the same output.

The benefit is that each function can be made and tested in isolation since it does just one thing. And over time, the function can be optimized and made a lot better without it ever impacting the code where it is called. But, in a world of pure functions, there's still a need to bridge into the real and more messy world of side-effects. These are anything from logging to the console, writing to a file, updating a database, or any external process. The key here is to separate all code that produces side-effects from the pure logic of a program and isolate them.
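
A toy illustration of that separation, with the pure logic kept apart from the side effect (the function names here are hypothetical):

```js
// pure: builds the message and touches nothing outside itself
const formatGreeting = (name) => `Hello, ${name}!`;

// impure: the side effect (writing to the console) lives at the edge
const printGreeting = (name) => console.log(formatGreeting(name));

printGreeting('Ada');  // Hello, Ada!
```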

Lastly, with functional programming, there is an incessant creation of copies of the same data, given that functions do not modify their input. This problem has been solved by persistent data structures.

My learnings for this post came from here and here.

All data in a computer is stored through a binary electrical system – binary as in bi, two. The bit, the computer’s unit of data, is expressed through an electrical signal or the lack thereof. This signal is managed by a transistor, a tiny switch that can be activated by the electrical signals it receives. If the transistor is activated, it conducts electricity. This creates an electrical signature in the computer's memory equivalent to a 1 or a truth. Otherwise, the lack of signal is equivalent to a 0 or a false.

The basis of this binary system, as we have it today, was first introduced by Leibniz in 1689, as part of an attempt to develop a system to convert verbal logic into the smallest form of pure mathematics. It is said Leibniz was actually influenced by the I Ching 🤯 and was attempting to combine his philosophical and religious beliefs with the field of mathematics. Together with George Boole's work in logic and Claude Shannon's MIT paper relating them to computing, this was the basis for the simple and yet incredibly ingenious system behind today's digital computer.

There have been ternary and even quinary electrical systems developed in the field of computing. But the more complex the system, the harder it is to tell the difference between voltage levels, especially when the computer is low on battery or its electrical system is interfered with by another device (e.g. a microwave). So the world settled on binary, the simplest and most effective system. The voltage is either there or not.

That's how we get zeros and ones: electricity.

As of late, I've been practicing giving up control of my story and instead letting it unfold as it wants to. Like Jobs said at Stanford in 2005, you can't connect the dots looking forward; you can only connect them looking backwards. You have to trust that the dots will somehow connect in the future.

I make a mean investor pitch deck, I design products, I think about brands in terms of the people they're for, I program and love algorithms, I've read most of Carl Jung's books, I visualize information, make art, and enjoy learning about pretty much anything—e.g. economics, biology, art, code, design, or architecture. These are my dots.

Ballet is beautiful because of the opposing tension in a contorting body. It seems as if the leg is twisting to the left, but it is also, in fact, moving toward the right. A comedian's joke is funniest when delivered with deadpan seriousness; it's her seriousness that makes it funny. A piece of art is captivating when it's both familiar and foreign. We're drawn by what seems almost like...but not quite.

This ambiguity also exists in our interior reality; we have contradicting personality traits, our interests are at odds, our goals collide. Some days we wake up painters, on other days, poets and scientists. My intuition tells me that these nonsensical contradictions might be what makes life beautiful—as it does ballet, a joke, or a piece of art. I find this possibility invigorating. When my day feels confusing and my interests unruly, I tell myself: embrace ambiguity.

What is it about being alone at a coffee shop—about being alone in public, anonymous amidst an audience of strangers? Perhaps, between the anonymity and exposure, we're free to lose a little of who we are and invent a little of what we can become.

I’ve had a somewhat liberating epiphany recently. The methods built into a programming language can be written from scratch using primitive building blocks like if-else statements and loops. Built-in methods exist to bundle complicated procedures behind one simple interface; but they're simply solutions to common problems so a programmer doesn’t have to write them over and over again. Programming is problem solving, whether I use complex or simple tools.

It's the same in design. There are many nuts and bolts to every tool. Sketch and Figma are filled with smart details meant to make a designer's life easier. But I also know, by virtue of my experience, that all I need is a blank canvas, the rectangle tool, type, and color. Tools are helpful, but the work happens in thinking about and experimenting on a problem enough that eventually a solution starts to emerge—regardless of the tool used.

To concretize this, I wrote my own version of Javascript's splice() method. I’m sure my algorithm could be made better, cleaner, faster, and more efficient. But what a fun experience to realize, in practice, that a method like splice is really just a beautiful function, like my own functions.

Splice is a robust method. With one single line of code, I can shorten an array, remove items at specific index positions, or even insert multiple new items at a location. It works in place and therefore on the array itself.
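
A quick reminder of how the built-in splice behaves:

```js
const months = ['Jan', 'Feb', 'Mar', 'Apr', 'May'];

months.splice(1, 2);               // removes ['Feb', 'Mar'] in place
console.log(months);               // ['Jan', 'Apr', 'May']

months.splice(1, 0, 'Feb', 'Mar'); // inserts at index 1, deletes nothing
console.log(months);               // ['Jan', 'Feb', 'Mar', 'Apr', 'May']
```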

In my own version of splice, I built a couple of dedicated methods to perform each major procedure. Things like shortening an array, deleting an item(s) at a particular location, and inserting as many elements as passed onto the function sequentially into the array.

A method to shorten the array:
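
The original snippet isn't reproduced here; a minimal sketch of such a helper might look like this (the name `shorten` is mine):

```js
// trims the array, in place, down to the given length
function shorten(arr, newLength) {
  while (arr.length > newLength) {
    arr.pop();
  }
  return arr;
}
```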

Methods to delete an item(s):
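
Again, a rough sketch rather than the original code (the name `deleteAt` is mine):

```js
// removes `count` items starting at `index`, shifting the tail left, in place
function deleteAt(arr, index, count = 1) {
  for (let i = index; i + count < arr.length; i++) {
    arr[i] = arr[i + count];
  }
  arr.length = Math.max(arr.length - count, index);  // drop the leftover slots
  return arr;
}
```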

A method to insert an item(s):
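
And a sketch of the insertion helper (the name `insertAt` is mine):

```js
// inserts items at `index`, pushing existing elements to the right, in place
function insertAt(arr, index, ...items) {
  const tail = arr.slice(index);  // everything after the insertion point
  arr.length = index;             // cut the array at the insertion point
  arr.push(...items, ...tail);    // add the new items, then restore the tail
  return arr;
}
```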

Finally, they all came together as a single splice method with a nice O(n) asymptotic complexity. Like in Javascript’s original splice, my splice method takes in as many arguments as needed, and based on that updates its behavior internally with no outside input.
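
The combined method isn't shown in this post either, but a compact sketch of a splice-like function along those lines could look like this:

```js
function mySplice(arr, start, deleteCount, ...items) {
  const len = arr.length;
  // negative start counts back from the end, like the built-in splice
  const s = start < 0 ? Math.max(len + start, 0) : Math.min(start, len);
  // if deleteCount is omitted, remove everything from start onward
  const dc = deleteCount === undefined
    ? len - s
    : Math.min(Math.max(deleteCount, 0), len - s);
  const removed = arr.slice(s, s + dc);  // what gets returned to the caller
  const tail = arr.slice(s + dc);        // everything after the deleted chunk
  arr.length = s;                        // cut the array in place
  arr.push(...items, ...tail);           // insert the new items, then restore the tail
  return removed;
}

const letters = ['a', 'b', 'c', 'd'];
mySplice(letters, 1, 2, 'x');  // returns ['b', 'c']
console.log(letters);          // ['a', 'x', 'd']
```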

All in all, lots to learn – but that was fun.

There’s usually a next step encoded in my mind in the form of an intuition or a hunch. Maybe there's a couple, but I venture to say one is usually optimal. Sometimes I wonder what would happen to life, as I know it, if I lived only from that small and intimate space of knowing. No grandeur, no being this or that type of person, just a continuous dialogue with that intuition. What’s the next step… and after that, what’s the step after that?

As children, we learn what to do and not do through parental approval, and unless we grew up with a parent aware of the subtle value of mistakes, most of us might not know how to be bad, and therefore good, at something. It is virtually impossible to become good at something without first being bad at it.

My intuition is that grit, hustle, persistence, are qualities rooted in our ability to override the primal instinct to want to be accepted, validated, and a good child. Overriding it with a new kind of ability of being embarrassingly bad, asking dumb questions, seeming unintelligent, until—little by little, step by step, bird by bird—it all starts to come together.

Underline is an app I often wish I had when reading a paper book. Yes, I do read on Kindle, and have become rather a sparse consumer of physical books out of care for the planet. But, every once in a while, I’ll treat myself to a flesh-and-bone book to go through the joy of underlining great thoughts, encircling interesting ideas, and annotating my own takes on the margins.

Often, I find myself wanting a single-purpose piece of software that uses the might of text recognition but only on underlined text. A simple app to bring the incredible power of the digital marginalia into the magical world of a physical book.

Societies emerge from the cooperation between people. This cooperation organizes people into communities, companies, governments, multinational bodies, and one day even multiplanetary organizations.

Money has been one of the driving forces behind this cooperation; a tool that allows us to exchange value with each other. But that is only one facet of our sophisticated reality.

The terms of any cooperation have to be agreed upon, even if implicitly. As a result, there’s an entire world, invisible on a daily basis, of contracts, agreements, term sheets, lawyers, notaries, even governmental bodies, that together serve the function of enabling and preserving cooperation.

From two friends agreeing to collaborate on a side-project, to the contract of marriage, all the way to the opening of a bank account, the purchasing of a house, the wiring of money for the payment of a service, the fine print of an insurance policy, or even the law that governments uphold.

We managed it relatively well until the computer came along, then the internet, and finally the inevitable digitalization of our world. We spent the last 30 years, or so, truly coming online.

The prospect of a world in which all forms of content and communication are in digital form on easily modifiable media raises the issue of how to certify when something was created and what its contents are.

The above paragraph is a paraphrase from a paper published in 1991, in the Journal of Cryptology, by Haber and Stornetta, on how to time-stamp a digital document. The ideas in this paper, some argue, were foundational to the beginning of the thought process that would later lead to the blockchain.

Issues of validity and truth were already emerging as early as 1991, as we recognized that the digitalization of the world came with a whole new set of challenges. Agreements are modifiable, documents are hackable, terms are forgeable, in ways that can easily be hidden from the naked eye. These are issues that we’ve come to know quite well in the collective imagination with the emergence of deep-fakes, corruptible elections, and the challenges of the 24/7 social-media-enabled news cycle.

In the physical world, in the old world, we might have written the truth down in numbered books, with no pages left blank, signed, stamped, and stored safely. That alone, digitalization aside, was prone to error and forgery. Now imagine that kind of book-keeping at a global scale. That, coupled with computers and bits, mutable in nature, seems to have inevitably led to the emergence of the blockchain.

I say inevitably, but only in retrospect.

“The blockchain is a digital, decentralized, distributed ledger. Most explanations of the importance of the blockchain start with money […] But money is only the first use case [...] and it’s unlikely to be the most important.”

That is the opening line of The Blockchain Economy: A beginner’s guide to institutional cryptoeconomics, a Medium piece by Chris Berg, Sinclair Davidson and Jason Potts. I mention it here because it was this piece that gave me the mental model to contemplate the bigger societal landscape from which the blockchain emerges.

When reading and learning about the blockchain, it’s easy to come across a certain understanding of it as a “new technology” in the way that Apple’s new M1 chip or Artificial Intelligence are new technologies.

But central to understanding the blockchain is seeing it more as an idea than a technology. One of those apparently simple and obvious ideas that come along once in a while. Obvious in the way that the wheel is obvious – that is, not obvious at all; it took us until around 4000 BC to come up with it. Yet, when it came about, it fundamentally altered the course of culture for the better.

Today, societies are made of citizenship, voting, laws, ownership, property rights, contracts, legalities, who can do what and when … and central to all this are ledgers which, at their most fundamental level, map these economic and social relationships.

The genius of the blockchain is that, on one hand, it’s just a ledger. But on the other hand, it’s a radically new idea for how to do just that. In its simplest form, it’s made up of two parts.

The first is the idea of storing information in such a way that each record carries with it a fingerprint. This fingerprint is an abstract representation of both the current record and the previous record. In real terms, when a block of information is created, a large number is generated based on the interweaving of the data inside the current block as well as some of the data from the previous block.

This results in the chaining of information, as it’s stored, in such a way that it becomes very hard to manipulate or compromise it. Simply put, if I change one block, then I have to change the blocks around it because the fingerprints have to match. And if I do that, I have to also alter the blocks around those blocks, ad infinitum.
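
A toy illustration of the fingerprinting idea in JavaScript, using Node's built-in crypto module; this is nowhere near a real blockchain, just the chaining of hashes:

```js
const crypto = require('crypto');

// a block's fingerprint folds together its own data and the previous fingerprint
function fingerprint(data, previousHash) {
  return crypto.createHash('sha256')
    .update(previousHash + JSON.stringify(data))
    .digest('hex');
}

const genesis = { data: 'first record' };
genesis.hash = fingerprint(genesis.data, '0');

const second = { data: 'second record' };
second.hash = fingerprint(second.data, genesis.hash);

// tampering with the first record changes its fingerprint,
// which no longer matches the one folded into the second block
```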

The second idea is that no one computer, authority, institution or government is responsible for the bookkeeping. The blockchain is stored over and over again on multiple computers, owned by multiple people, across multiple countries.

Every time new records are added, computers from around the world compete with each other to update the blockchain, and get rewarded based on the validity of their update. That is then cross-checked by other computers in the network and only then, once all is squared away, does the blockchain get updated across the remaining computers.

The probability of the same computer, person, agent, or organization, updating the blockchain twice in a row is, as it currently stands, very low.

Together, these ideas form a global, decentralized, and hyper-secure way of storing information. When seen through this lens, this might be an invention akin to ideas like democracy and capitalism: ideas that structure the fabric of our world.

This is a system with the potential to enable cooperation at a planetary scale, independent of any one person, organization, institution, or country, by being a transparent and secure account of what was said, what was agreed upon, what was done, what was traded, what was sold, …

Berg, Davidson, and Potts are not exaggerating when saying that the blockchain competes with firms and governments as a way to coordinate economic activity; read global cooperation. It is no wonder, then, that the blockchain emerged in the aftermath of the financial crisis of 2008. A time when it became apparent that the old system could be manipulated for the benefit of a few, at the expense of the many.

Comparable to the inventions of the wheel, mechanical time, and the printing press, the blockchain might be about to open up entirely new categories of economic organization that had until now been not only impossible, but unimaginable.

I’m left nothing other than awe-struck, inspired, and energized.

Credits

Personally, I found it useful to take a step back from all the buzzwords and the threads on social media and see the blockchain through a broader and more agnostic lens. The vision introduced here is not mine. I’m articulating it in lay terms as a means to clarify things for myself. But head over to Medium and read Berg, Davidson and Potts’ piece. I’m here to learn, not to be an expert; so if any part of it could be made better, by all means, do reach out and share your perspective.

We’re all desperate to be recognized for the things we have to offer. Everyone around you is looking for the invitation you are making to them. Quite often, we’re existentially disappointed because there is no invitation. The greatest invitation is for you to say to them that they have gifts that you do not have; and therefore you need their help. That is the most powerful leadership invitation you can make. — David Whyte on leadership, Making Sense podcast.

How beautifully paradoxical that good leadership is the ability to recognize the humanity in the other to the same extent that I recognize my own. To step out of my own need for recognition and allow others to come forth with their own gifts. To gift them, in return, the recognition they, too, seek. Every moment, an opportunity to invite as much as I want to be invited; to recognize as much as I want to be recognized, to lead even though at times it's easier to let myself be led.

One of the proverbial things to understand in programming is the mouthful of why, when we let a = 1, then let b = a, and then change a = 5, b is still 1. In an attempt to clarify this, I created a one-sheet visualization of the matter at hand.

In essence, primitive values (i.e. things like strings, numbers, and booleans) are stored by storing the value. This means that the actual value is stored inside the variable. So when I tell the computer to store a in b, I’m not storing a link from b to a, but a copy of the value originally stored in a.

More complex values (i.e. things like arrays or objects, or in lay terms, collections of primitive values) are stored by storing the reference to the value. This means that what gets stored in the variable is a reference to the location in memory where the data is stored.
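
A small illustration of both cases in JavaScript:

```js
// primitives are stored by value
let a = 1;
let b = a;       // b gets its own copy of the value 1
a = 5;
console.log(b);  // 1

// arrays and objects are stored by reference
const first = [1, 2, 3];
const second = first;   // second points to the same array in memory
first.push(4);
console.log(second);    // [1, 2, 3, 4]
```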

Javascript is the fourth programming language I’ve come into contact with. I started by learning about how computers work in C; I learned to program in Python and Swift. Javascript is now the language I’m learning new technologies with.

One of the cool things I’m experiencing as I learn a new language is how much easier it is to get started. Even though each language has its own purpose and syntactic idiosyncrasies, they all share the same principles. Recursion is recursion is recursion, no matter the language.

So, this time I took a non-linear approach to learning the nooks and crannies of Javascript and compiled the main things I wanted to retain in a single one-sheet.

If I had to choose a philosopher I'd choose Socrates—even if he is a fictional character created by Plato. His death is a stark example of how flawed we can be in group-mind, as true two thousand years ago as it is today in the current social climate.

Yes, there are things I find him guilty of. Most critically, of being so hellbent on educating others through ridicule. In aiming toward an ideal, Socrates failed to find the compromise that could have led his contemporaries to recognize their own limitations without it being at the cost of his own life. But, in aiming toward this ideal, he left us a sharp example of what to be both as individuals and as a people.

I thought Philosophy was a field of study like chemistry or physics; and it is, by virtue of its history. But there's also something deeply personal about it. The questions at the heart of Philosophy are questions fundamental to human life. Who am I? Who are we? What is my purpose? Am I living a good life? Philosophy is rooted in disquiet; by examining ourselves, we inevitably touch on ideas that pertain to all of us. How do we live together? How can we be fair and just to each other? How do we continuously improve as a people?

Philosophy has as much to do with legendary names like Plato and Aristotle as it does with me or you. It emerges out of the same unrest and curiosity at the heart of wisdom traditions like Zen, Buddhism, or Christianity. The questions, prayers, and koans we carry with us are the same questions that live at the heart of an entire field of study.

Where Philosophy is unique is that it asks these very fundamental questions in a scientific way—nothing is taken for granted and all the knowledge it generates is rooted in human discovery. Philosophy is unique because, in being a method, it examines these perennial questions in such a way that they can be poked at by others. Those same questions we carry with us every day are also a two-to-five-thousand-year-old field of study. As relevant then as it is today.

I feel under-represented when I say I’m a designer—not because I’m not, but because it’s not an accurate description of what I do. My upbringing was shaped by a white MacBook with round edges and an internet connection. Like many, I acquired a plethora of digital skills and emulated worldly people who did many interesting things. I have, as a result, turned myself into a Minotaur of sorts: head of a polymath, hands of a designer, heart of a hacker, torso of a communicator.

When I start a new job, no one can define what I do. Two months in, I'm deep in the company story, making company pitch decks, re-designing the product, and hacking the website from scratch. The magic seems to happen at the margins. I'm a better communicator when I design the messaging, a better entrepreneur when I hack the product, a better hacker if I’m making the pitch deck. I'm no exception, my most talented friends are in a similar predicament.

Job titles make us one dimensional, but when we explore what we’re curious about and, as a result, acquire new skills and sensibilities, we end up becoming something akin to an indescribable Minotaur. When I level with people and, to paraphrase Judd Apatow, tell them I'm figuring it out and patching it up to make it look like I have a clue, that's when I get the most interesting responses. Turns out there's a polymath inside most of us.

My classmates stared at me, perplexed, as I tried to read aloud. It was like a balloon had inflated in my throat; no sound came. My mum says the stuttering began in kindergarten, but this moment is the first I remember. My dad stuttered too, but unlike me, only occasionally and without shame. For my parents, stuttering wasn't a problem, and speech therapy was of no use. Yet, it was always there: when I introduced myself, when called on in class, when ordering at a restaurant, when answering the phone, and worst of all, when strangers finished my sentences. This was a battle I had to fight on my own, and I was convinced speaking would one day be my superpower.

So I searched, in books and on the internet, for other people's experiences that could help me with my own. I discovered Demosthenes, one of ancient Greece's greatest orators, who stuttered. I studied the work of Jill Bolte Taylor, a neuroanatomist who documented her recovery from a severe stroke. I learned about Barbara Arrowsmith-Young, a pioneer in neuroplasticity who taught her learning-impaired brain how to tell time and went on to become a renowned psychologist. My bedroom became my laboratory, filled with ideas and schematics on yellow sticky notes. As I learned about others, their discoveries became my own. I filled my mouth with pebbles to connect with the physical experience of speech, I read aloud in front of the mirror, I learned breath work—all with the discipline of an athlete.

At 19, I had my first breakthrough. While settling into eye contact during a conversation, I entered a state of flow—much like that of athletes—and the figment of an image appeared in my mind, an abstract representation of what I wanted to say. As I focused on it, words started to come out. That was the first time I spoke without interruption. In the years that followed, I took every opportunity to harness this flow—with friends, family, and even strangers. A decade later, I'm still puzzled by how it all works. A hiccup here, a stumble there, but today these are part of my manner of speech.

My stutter shaped me greatly. It taught me to challenge the way things are, to seek knowledge in unordinary places, and most importantly, it taught me the value in seeing the world through someone else's eyes.

Today, I see that we are shaped by the stories we carry with us; who we are, what we believe, and what we think we can and cannot do. These stories are like software for the brain—we run on them—software that can help us rediscover who we are in new and empowering ways. It was stories, after all—of an orator, a neuroanatomist, and a psychologist—that gave a young stuttering boy the means to transcend his own.

Firstly, let's reduce all the controls in the mobile browser to one single button—like the iPhone did with the mobile keyboard—and have it be the central access point to everything, from the URL input to tabs and sharing.

Secondly, let's give websites a solution to manage and distribute announcements without them becoming a nuisance on screen.

Third and lastly, on websites that take proper advantage of the heading system, let's incorporate the headings into the scroll bar, so that I can jump to a specific section of the website without having to scroll all the way back to the top.

A pitch I developed for a friend setting up a venture capital network to pair first-time founders with investors and entrepreneurs. The pitch itself is confidential, but I'm including here some of the surrounding work.