Neurons

Source: https://cmns.umd.edu/news-events/features/2501

And now for something completely different!

Today I’m going off on a tangent, so if you only wish to read about mortgages, feel free to skip this post. If you enjoy reading about future technology though… read on!


Wearables Are Difficult

No, I’m not talking about the Moto 360 2015 edition Android Wear smartwatch (although it is cool). I’m referring to proper, futuristic devices: smart clothes that can fully replace our smartphones, things that project giant holographic screens when we raise an arm, or Google Glass.

They’re hard because they need to be two things at once: large enough for our clumsy fingers to interact with, yet small, unobtrusive and light/fashionable enough to carry or wear all day. We also don’t want too many of them, as currently that’d mean forking out $500 or whatever for each piece of clothing, pair of sunglasses or watch we get, not to mention charging them all! So our clothes themselves will likely remain dumb; it’s simply too expensive to have every shirt and pair of pants be a $500 computer. Having 40 different “smart devices” would also be a pain to manage data- and battery-wise (unless they all synced with each other, which is possible). No. We need to make life simpler, not more complicated. And a phone is too bulky… too short lived… not big enough for full interaction but too big to carry pleasantly. Anything else we try to make will suffer a similar fate, because nothing can be “big” and “small” at the same time.

How do we solve this problem?

We Stop Caring About Hardware

The physical device (iPhone 6S, Galaxy 7) is no longer the thing that matters; the software, our data and the active configuration of the device are. This active config changes, upgrades, learns and matches our patterns more and more over time. THIS is what we truly treasure with technology, not the hardware itself, but not many seem to realise it right now.

Old Phones

We throw away our old phones, or they become dead, lifeless relics stored in a cupboard or sold on eBay after we upgrade. We THINK we deeply care about our phones but we don’t… we like the AI inside our phones, the data in the cloud, the active configuration of the apps we have installed. The hardware changes every two years or so… ever improving and upgrading. What stays the same (generally) is the current active configuration of our online data and services. This is what we treasure, maintain, organise and share. The hardware could die a thousand times and we wouldn’t mind (besides the monetary costs). If our online data died once though, it would be devastating. That’s why we back up!

We sync it across multiple devices so it can be with us in different forms. A single source of information and help, but “in” many different physical real world objects all at once. Your email isn’t just on your phone; it’s on your tablet, your PC, someone else’s PC you log in to, at work and so on. This is the way AI works. It is not trapped in one physical object… it is in multiple ones all at once, and this allows us to leave one of those physical objects behind but still have it with us. Quite the evolutionary leap.

Companionship

This companionship with our lifelong data and services gives comfort. Knowing we can, at any time, calculate impossible numbers, look up endless facts, keep track of everything going on and communicate with anyone anywhere has become a symbiotic relationship. The person and their AI. The AI that remembers all their writings, all their conversations with friends and family, all their appointments, likes and dislikes. The AI that can remind them of previous visual images and memories (your pictures). The AI that helps make important decisions and can even predict our behaviour. We share everything with our AI and in turn we trust it, care for it and depend on it. Without it we feel like someone has taken a part of us away. It sounds odd, but seriously try it: leave your phone at home one day, go to work and see how you feel!

We can program it to be human-like… but with today’s technology it’s still an imperfect copy, so it only serves to creep us out. Until the simulation reaches an acceptable level we tend to prefer our AI to be clearly artificial: polite and kind but emotionless and deceptively dumb. We loathe things smarter than us. We want to keep our AI, but not have to actually carry it with us. Have it contained in something so small it is always on you but never carried. Always there but never seen. Always near but never thought about.

A ring would be a fine way to conceal a companion AI… or perhaps a decorative button you attach to your clothes each day. However, with no screen you would need a direct Brain Computer Interface (BCI) in order to communicate with it and command it. To best support the BCI it is likely better to have the hardware in close proximity to the brain. The ear and glasses are the most common ways of attaching something to the head, but both are usually intrusive and uncomfortable. We will wear them if we must, but we prefer not to. Fortunately this hardware will go internal once it is small enough, perhaps inside the mouth first or deep in the ear.

The Development Of BCI

As soon as BCI is developed, its best use will not only be the transfer of data from AI to human brain… but the transfer of data from ANOTHER human, via two AIs, into your brain. A true telepathy where you can talk to others at any distance simply by thinking. Even the most basic communication (saying “hello”) will be stunning. To be able to simply think “hi babe” and have her respond in your mind when she is in a different city would be amazing, let alone the sharing of visual images, touch perception or pure knowledge itself. It is because of this that BCIs are quite possibly one of the most important technologies we need to develop. Without one, our AI will always be disconnected, with both hands and feet tied behind its back.

The focus should not be on the physical hardware device. That will evolve over time and be worked on by millions of people and machines alike, always improving. No, the important part is the BCI: the ability to send information directly into the brain without a direct line of sight or anyone else hearing. A constant and very secure communications pathway between your brain and the AI. A WiFi network like any other… it’s just that one end is human, not machine, this time.

Going right back to basics, a standard telecommunications link simply alters something physical, and both ends agree in advance that the alteration has two states: 1 and 0. Our brains do not work like this, so translation is required in real time. Our brains cannot really be physically adapted or changed, so it is easiest to do the translation on the computer side.

The best models of how the brain works go something like this:

Neurons fire one after the other when a certain thing is detected. At the most basic level this might be one edge of the letter “A”. As more and more of these neurons fire from seeing the letter, they trigger a higher-up neuron responsible for detecting the entire letter “A” rather than just one of its edges. Up the hierarchical chain it goes, with more neurons firing for more and more letters, eventually triggering a neuron responsible for detecting a specific word. The complexity keeps climbing until you’ve read, say, an entire sentence and then understood it in relation to a previous experience, say “Arrow is my dog”. This might then cause other neurons to fire that prompt a “feeling” of love for the dog or familiarity with the past.
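If you like code, here’s a toy sketch of that hierarchy in Python. It’s purely an illustration of the idea (edge detectors feeding a letter detector feeding a word detector), not a model of how real neurons are actually wired:

```python
# Toy sketch of the hierarchical firing model described above.
# Each "neuron" fires when enough of its inputs have fired.
# This is an illustration only, not real cortical wiring.

class ToyNeuron:
    def __init__(self, name, inputs, threshold):
        self.name = name            # e.g. "edge:/", "letter:A", "word:Arrow"
        self.inputs = inputs        # lower-level ToyNeuron objects
        self.threshold = threshold  # how many inputs must fire
        self.fired = False

    def update(self):
        # Fire if enough of the lower-level detectors have fired.
        if sum(n.fired for n in self.inputs) >= self.threshold:
            self.fired = True
        return self.fired

# Lowest level: edge detectors for the three strokes of the letter "A".
left_stroke = ToyNeuron("edge:/", [], 0)
right_stroke = ToyNeuron("edge:\\", [], 0)
cross_bar = ToyNeuron("edge:-", [], 0)

# Next level up: a detector for the whole letter "A".
letter_A = ToyNeuron("letter:A", [left_stroke, right_stroke, cross_bar], 3)

# Higher still: a detector for the word "Arrow" (other letters omitted).
word_Arrow = ToyNeuron("word:Arrow", [letter_A], 1)

# Simulate seeing the three strokes of an "A" and let activity propagate up.
for edge in (left_stroke, right_stroke, cross_bar):
    edge.fired = True

letter_A.update()
word_Arrow.update()
print(word_Arrow.fired)  # True - the firing has climbed the hierarchy
```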

In order to “hear” those same words in our brain, those same neurons would have to fire in that same sequence. These neurons are spread across multiple locations and are different in every person. To set up or calibrate the BCI, one might record the brain’s neurons firing while the subject reads every English word in their mind. The BCI would map “hello” to whatever neuron pattern fired when you read it. This would take reading around 10,000-20,000 essentially random words. At roughly 600 words per page that’s somewhere around 17-33 pages of solid text. Not a fun way to spend your weekend, but it’d only need to be done once.

Importantly, the BCI wouldn’t need to understand the words, simply map them to a neuron pattern and then be capable of triggering that pattern again when needed. So the BCI needs to be able to both scan and detect individual neurons (or groups of them) firing, as well as trigger individual neurons or groups of neurons to fire. Scanning neurons would be important not only for calibration but also for detecting the user’s commands.
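To make that concrete, here’s a rough Python sketch of the calibration and playback idea. The record_firing_pattern() and trigger_pattern() functions are hypothetical stand-ins for whatever the sensor hardware would actually provide:

```python
# Sketch of the word -> neuron-pattern mapping described above.
# record_firing_pattern() and trigger_pattern() are hypothetical
# placeholders for the sensor layer; they are not real APIs.

def calibrate(word_list, record_firing_pattern):
    """Build the per-user word -> firing-pattern table."""
    pattern_db = {}
    for word in word_list:
        # The user reads `word` in their mind while the sensors record
        # which neurons fired, and in what order.
        pattern_db[word] = record_firing_pattern(duration_s=1.0)
    return pattern_db

def say_to_brain(text, pattern_db, trigger_pattern):
    """Replay the recorded patterns so the user 'hears' the text."""
    for word in text.lower().split():
        pattern = pattern_db.get(word)
        if pattern is None:
            continue  # word never calibrated; a real system would need a fallback
        trigger_pattern(pattern)

# Note the BCI never needs to understand "hello"; it only needs to be able
# to reproduce whatever firing pattern was recorded for it.
```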

Hardware Of A BCI

We’re getting a bit more complicated than normal here but stay with me!

A BCI would be broken up into two main parts: the main command unit, and the billions of sensors attached to each and every neuron.

A Neuron

Source: https://biologywedscomputer.wordpress.com/tag/neuron/

This is how a neuron works: when the neuron fires, the neural impulse (an electrical signal) travels down the axon and on to other neurons via the synapses.

The main command unit (picture something like your mobile but with no screen) would receive information just like your smartphone does, in 1s and 0s over a traditional WiFi/Bluetooth-style network, and then look up its giant database table to convert the digital message into the right pattern of neuron firings. “Hello”, for example, would arrive as 1s and 0s. Once the word has been received, the unit knows it must make a certain set of neurons fire in a specific order for the brain to experience that word. Each main command unit would build up a unique pattern database specific to its user, one that would likely evolve over time.
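In code, the receive-and-translate step might look roughly like this, assuming (and it is only an assumption) that words arrive as plain UTF-8 text over the radio link:

```python
# Sketch of the main command unit's receive-and-translate step.
# The over-the-air format is an assumption: plain UTF-8 text here.

def handle_incoming(raw_bytes: bytes, pattern_db, trigger_pattern):
    word = raw_bytes.decode("utf-8")       # "Hello" arrives as 1s and 0s
    pattern = pattern_db[word.lower()]     # look up the per-user pattern table
    trigger_pattern(pattern)               # fire those neurons in order
```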

The ultimate BCI would have a sensor device implanted inside, or sitting on, each of the person’s neurons. All of their neurons. Each sensor would be able to detect the neuron firing, trigger that neuron to fire, and transmit/receive instructions to/from the main command unit, all in real time.

As a side note, the security surrounding this technology would have to be immense, as an attacker could gain full control of a person’s body and mind and could also upload thoughts, ideas, beliefs, everything. You could hack and mass-exploit millions or even billions of people at the same time… yet people would still accept the risks because the rewards would be so great. In fact, it’d be near impossible NOT to have a BCI once they’re invented, given the advantages everyone else would have. It’d be like trying to compete with someone without using any digital infrastructure: you’d just lose.

When functioning properly, you would think of a command over a few seconds (say “what’s 80 x 26?”), and the billions of sensors would record all the neural firings in your brain and transmit this data to the main command unit. The main command unit would then interpret the result by looking up what each of the neuron firings “means”, calculate the answer and convert that answer into a set of neural firings, sending it back to the sensors. The sensors would then trigger the needed neurons to fire, and it would feel like your own brain telling you “the answer is 2080”.
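As a sketch, that round trip might look something like the following, with decode_firings(), encode_words() and friends again being hypothetical placeholders for the sensor layer and pattern database, and simple multiplication standing in for the AI:

```python
# Sketch of the read -> interpret -> compute -> respond loop described above.
# All the callables passed in are hypothetical placeholders.

import re

def answer_thought(record_firing_pattern, decode_firings, encode_words, trigger_pattern):
    # 1. The sensors stream a couple of seconds of neural activity.
    firings = record_firing_pattern(duration_s=2.0)

    # 2. The command unit looks up what that activity "means".
    question = decode_firings(firings)           # e.g. "what's 80 x 26?"

    # 3. The "AI" computes an answer (here, just multiply the two numbers).
    a, b = map(int, re.findall(r"\d+", question)[:2])
    answer = f"the answer is {a * b}"            # "the answer is 2080"

    # 4. Convert the answer back into firing patterns and play them back.
    for pattern in encode_words(answer):
        trigger_pattern(pattern)
```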

The whole “thought” and “answer” round trip could happily take 1-2 seconds. Reading the neural firings would also have to be done over a few seconds, as we don’t think instantly, and the sensors would need to monitor ALL neurons at least once every 5ms in order to catch each firing. One sample every 5ms is 200 samples per second, so at one bit per sample each neuron sensor needs roughly 200 bits per second. With the human brain’s roughly 86 billion neurons, real-time recording of all of them requires a bandwidth of 200 bps x 86 billion = 17,200 billion bits per second, or 17.2Tbps. Wirelessly. A speed of around 30Tbps would be preferable for the overall system, as you’d need headroom for overheads in the data transfer too.

Individually though, each sensor would only require a wireless bandwidth of 200bps, which is not hard to do; it’s just that with ALL 86 billion sensors communicating at once the data stream would be huge, though still (just) manageable by even today’s standards. That said, there’s no way all 86 billion would be communicating simultaneously, because your whole brain is never firing all at once. So the overall data transmission rate would be a lot less.
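For the curious, here’s that bandwidth arithmetic spelled out. The 10% “active fraction” at the end is purely an illustrative guess, not a measured figure:

```python
# The bandwidth arithmetic from above, spelled out.

NEURONS = 86e9              # ~86 billion neurons in the human brain
SAMPLE_PERIOD_S = 0.005     # sample every neuron at least once per 5 ms
BITS_PER_SAMPLE = 1         # one bit: "did this neuron fire in this window?"

per_sensor_bps = BITS_PER_SAMPLE / SAMPLE_PERIOD_S        # 200 bps per sensor
total_bps = per_sensor_bps * NEURONS                      # 1.72e13 bps
print(f"{total_bps / 1e12:.1f} Tbps peak")                # 17.2 Tbps

with_overhead = total_bps * 1.75                          # headroom for protocol overhead
print(f"~{with_overhead / 1e12:.0f} Tbps with overhead")  # ~30 Tbps

# Only a fraction of neurons fire in any 5 ms window.
# 10% here is an illustrative guess, not a figure from any study.
active_fraction = 0.10
print(f"~{total_bps * active_fraction / 1e12:.1f} Tbps if 10% are active")
```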

Size Of The BCI

The size of a neuron ranges from about 4 microns (0.004 mm) to 100 microns (0.1 mm) in diameter, so the sensors would have to be around 5 microns at most. The general design of a sensor would involve a nanocapacitor, power circuitry and a wireless transceiver. When the neuron fires, the sensor would draw some of the neuron’s generated voltage and use it to charge up its nanocapacitor. At the same time, the increase in stored charge would trigger the sending of a specific code via the wireless transceiver on a specific frequency, indicating that the neuron has fired along a specific axon branch. In this way the sensor keeps itself fully charged and detects individual firings in one action, with minimal on-board complexity.

When the sensor receives an authorised wireless signal from the main command unit, it would use some of its nanocapacitor’s stored energy to induce a current in the neuron’s axon, triggering it. This is done via direct physical contact with the neuron, shunting the nanocapacitor’s voltage into the axon. This “fires” the neuron just as normal, with minimal loss of energy. Given that there would be significantly more “reads” of the neuron’s firings than “writes” (induced firings), the sensor would likely have ample firings to charge its nanocapacitor reserves to full before it is required to spend some of that stored energy firing the neuron again.

So each sensor would be no bigger than 5 microns, have a wireless transceiver with a bandwidth of 200bps and a range of perhaps 20cm, and a nanocapacitor able to produce the ~110mV potential change that happens during a normal neuron firing.
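Here’s a toy sketch of how such a sensor might behave, treating the nanocapacitor as a simple charge counter. All the thresholds and charge amounts are invented for illustration:

```python
# Toy sketch of the per-neuron sensor described above.
# Charge levels, thresholds and the radio interface are invented placeholders.

class NeuronSensor:
    def __init__(self, sensor_id, radio):
        self.sensor_id = sensor_id
        self.radio = radio          # hypothetical ~200 bps transceiver
        self.charge = 1.0           # nanocapacitor charge, 0.0-1.0

    def on_neuron_fired(self, axon_branch):
        # Harvest a little of the ~110 mV swing to top up the capacitor,
        # and report the firing (and which branch) to the command unit.
        self.charge = min(1.0, self.charge + 0.05)
        self.radio.send(self.sensor_id, axon_branch)

    def on_command_fire(self, axon_branch):
        # Dump some stored charge into the axon to trigger a firing,
        # but only when there is charge to spare.
        if self.charge > 0.2:
            self.charge -= 0.2
            self.stimulate(axon_branch)

    def stimulate(self, axon_branch):
        # Placeholder for physically shunting voltage into the axon branch.
        pass
```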

The sensor would also have to not only monitor the neuron, but identify which axon branch the neuron fires along, seeing as each neuron’s axon may have dozens of branches leading to different destinations. It would also have to be able to trigger the neuron to fire along the exact branch it is instructed to. This would either increase the complexity of the sensor, or mean each neuron needs multiple sensors, one attached to each branch, which would also require each sensor to be maybe ten times smaller again. I can see this being a “version 3.0” type of system: version 1.0 might be a few billion sensors on some key neurons (but not all), version 2.0 might be more advanced sensors on ALL neurons, and version 3.0 would be multiple, even more advanced and smaller sensors covering every neuron and every axon branch.

Building A BCI

Going by how computers have shrunk over time, we should be able to build computers in the 5 micron size range by about 2020-2025. By that point a computer the size of today’s mobile phones could have around 30 TFLOPS of processing power, which was top-supercomputer territory only a decade or so ago. Without a screen or physical interface needed, this device could be roughly 5cm x 3cm in size (or smaller) and would be worn close to the brain, say as a necklace, in order to receive all the sensors’ signals. Depending on how strongly the sensors transmit, the main command unit could even just be thrown in your pocket, wallet or purse. It would receive and process all the signals from the 86 billion sensors as well as contain the AI and other communications equipment, just like today’s phones do. The main difference is that you’d never have to actually take it out and use it, so it could be stored in your wallet or purse the same as a credit card.

This main command unit and the 86 billion neuron sensors would together become the non-biological brain inside your brain: your personal AI that works with you, for you, faster than you can think.

Coming soon, in roughly 2025…


