
Defining our goal
Essentially, our task is to conceive a mechanism capable of dealing with any data it is introduced to. We want this mechanism to detect any underlying patterns present in our data, so that we can leverage them for our own benefit. Succeeding at this task means that we will be able to translate any form of raw data into knowledge, whether in the form of actionable business insights, burden-alleviating services, or life-saving medicines. Hence, what we actually want is to construct a mechanism capable of universally approximating any possible function that could represent our data; the elixir of knowledge, if you will. Step back and imagine such a world for a moment: a world where the deadliest diseases may be cured in minutes; a world where all are fed, and all may choose to pursue the pinnacle of human achievement in any discipline without fear of persecution, harassment, or poverty. Too much of a promise? Perhaps. Achieving this utopia will take a bit more than designing efficient computer systems. It will require us to evolve our moral perspective in parallel, and to reconsider our place on this planet as individuals, as a species, and as a whole. But you will be surprised by how much computers can help us get there.
It's important to understand that we are not talking about just any kind of computer system. This is something very different from what our computing forefathers, such as Babbage and Turing, dealt with. It is not a simple Turing machine or a difference engine (although many, if not all, of the concepts we will review on our journey relate directly back to those enlightened minds and their inventions). Hence, our goal will be to cover the pivotal academic contributions, practical experiments, and implementation insights that followed from decades, if not centuries, of scientific research into the fundamental concept of generating intelligence; a concept that is arguably the most innate to us humans, yet so scarcely understood.