Part VII — The Frontier

Neural Interfaces

Brain-computer interfaces, Neuralink, and the convergence of biological and artificial intelligence.

The previous chapter asked whether a different computational substrate could change what AI can do. This chapter asks a more radical question: what happens when you stop treating biological and artificial intelligence as separate systems and start wiring them together?

Brain-computer interfaces (BCIs) sit at the intersection of neuroscience, electrical engineering, materials science, and AI. The premise is straightforward: if the brain processes information as electrical and chemical signals, and we can build devices that read and generate electrical signals, then we can create a direct communication channel between brains and machines. The engineering is anything but straightforward.

BCI Basics: Reading and Writing

A brain-computer interface does one or both of two things:

  1. Recording (reading): measuring neural activity and decoding it into signals a machine can act on.
  2. Stimulation (writing): delivering electrical signals to neural tissue to create perceptions, sensations, or other effects.

Recording is more mature. Stimulation is harder, less understood, and raises deeper questions. Most current BCI research focuses on recording, with stimulation used primarily in established medical devices.

The fundamental challenge on both sides is resolution vs. invasiveness. You can get better signal quality by putting electrodes closer to (or inside) the brain, but this requires surgery and risks tissue damage, infection, and immune response. Non-invasive methods avoid surgery but sacrifice spatial resolution and signal quality.

[Figure: BCI approaches plotted by invasiveness vs. signal quality. Non-invasive methods (EEG and fNIRS on the scalp) yield population averages; invasive methods (ECoG on the brain surface, the Utah array and Neuralink threads in cortex) approach individual-neuron resolution.]

Non-Invasive Methods

EEG (Electroencephalography)

Electrodes on the scalp measure voltage fluctuations from the summed activity of millions of neurons. EEG has been used since the 1920s and is the most accessible BCI technology — consumer-grade EEG headsets cost a few hundred dollars.

EEG can detect broad patterns of brain activity (alpha waves during relaxation, beta waves during concentration, event-related potentials in response to specific stimuli) and has been used for BCI applications like P300 spellers, where the user focuses attention on letters that flash on a screen and the EEG detects the brain's response to the intended letter. But EEG's spatial resolution is poor — centimeters at best — and the signal-to-noise ratio is low because the skull and scalp attenuate the signal heavily.
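The P300 speller logic can be sketched in a few lines: average the EEG epochs recorded after each item's flashes, and pick the item whose average shows the largest positive deflection in the P300 window. A minimal single-channel sketch on synthetic data (the signal shapes and parameters are illustrative, not from any real headset):

```python
import numpy as np

def p300_select(epochs, fs=250, window=(0.25, 0.5)):
    """Pick the flashed item whose averaged epoch shows the largest
    positive deflection in the P300 window (~250-500 ms post-flash).

    epochs: dict mapping item -> array of shape (n_flashes, n_samples),
            single-channel EEG aligned to flash onset.
    """
    start, stop = int(window[0] * fs), int(window[1] * fs)
    scores = {}
    for item, trials in epochs.items():
        avg = np.mean(trials, axis=0)           # averaging suppresses noise
        scores[item] = np.max(avg[start:stop])  # peak amplitude in window
    return max(scores, key=scores.get)

# Toy demo: the "A" epochs contain a simulated P300 bump; "B" is pure noise.
rng = np.random.default_rng(0)
t = np.arange(0, 0.8, 1 / 250)
bump = 5 * np.exp(-((t - 0.35) ** 2) / (2 * 0.05 ** 2))  # peak near 350 ms
epochs = {
    "A": rng.normal(0, 2, (20, t.size)) + bump,
    "B": rng.normal(0, 2, (20, t.size)),
}
print(p300_select(epochs))  # -> A
```

Averaging over 20 flashes is why P300 spellers are slow: the single-trial signal-to-noise ratio of scalp EEG is too low to classify one flash reliably.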

fNIRS (Functional Near-Infrared Spectroscopy)

Measures changes in blood oxygenation in the cortex using near-infrared light. Like EEG, it's non-invasive and wearable. Unlike EEG, it measures metabolic activity rather than electrical activity, giving it better spatial resolution (~1 cm) but worse temporal resolution (seconds rather than milliseconds). fNIRS has been used for simple BCI tasks like yes/no classification from prefrontal activity.

Both non-invasive methods share a fundamental limitation: they measure the aggregate activity of large neural populations, not individual neurons. The information content in these signals is low — enough for simple commands (move cursor left/right, select yes/no) but not for the kind of high-bandwidth neural readout that would be needed for, say, speech decoding or fine motor control.

Invasive Approaches

BrainGate and the Utah Array

The BrainGate project, led by researchers at Brown University, Stanford, and other institutions, has been the most significant academic effort in invasive BCIs. It uses the Utah array — a small silicon chip (about 4mm × 4mm) with 96 needle-like electrodes that are implanted into the motor cortex.1

BrainGate's achievements are the clearest demonstrations of what invasive BCIs can do:

  - In 2006, a participant with tetraplegia used a 96-electrode implant to control a computer cursor and operate simple devices by intending movement.2
  - Participants have controlled robotic arms well enough to reach for and grasp objects.
  - In 2021, a participant imagining handwriting achieved text entry at about 90 characters per minute with 94% accuracy.3

The Utah array's limitation is its rigidity. Brain tissue is soft and moves relative to the skull; the rigid silicon electrode array causes tissue damage over time, and the body's immune response gradually encapsulates the electrodes in scar tissue, degrading signal quality over months to years.
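The raw output of an intracortical array like the Utah array is a voltage trace per electrode; decoders typically start not from raw voltage but from threshold-crossing event counts. A minimal sketch of the standard negative-threshold convention (the -4.5x noise multiplier is a common research choice, not a BrainGate specification, and the data here is synthetic):

```python
import numpy as np

def threshold_crossings(voltage, thresh_mult=-4.5):
    """Count putative spikes as negative threshold crossings per channel.

    The threshold is set at a multiple of a robust per-channel noise
    estimate (median absolute deviation scaled to std).
    voltage: array of shape (n_channels, n_samples), in volts.
    """
    noise = np.median(np.abs(voltage), axis=1, keepdims=True) / 0.6745
    thresh = thresh_mult * noise
    below = voltage < thresh
    # A crossing = a sample below threshold whose predecessor was above.
    crossings = below[:, 1:] & ~below[:, :-1]
    return crossings.sum(axis=1)

# Demo: 1 s at 30 kHz, ~10 uV noise; inject 10 spikes on channel 0 only.
rng = np.random.default_rng(1)
v = rng.normal(0, 10e-6, (2, 30000))
for i in range(1000, 30000, 3000):
    v[0, i] -= 80e-6
counts = threshold_crossings(v)
print(counts)
```

Scar-tissue encapsulation shows up in exactly this statistic: as the electrode-tissue interface degrades, spike amplitudes shrink toward the noise floor and the crossing counts fall.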

Neuralink

Neuralink, founded by Elon Musk in 2016, is taking a different engineering approach to the same fundamental challenge. Instead of rigid silicon arrays, Neuralink uses flexible polymer threads — thinner than a human hair — with electrodes distributed along their length. The flexibility reduces tissue damage and immune response compared to rigid probes.

The key engineering innovations:

  - Flexible polymer threads carrying electrodes along their length, for roughly 1,000 channels per implant.
  - A surgical robot that inserts the threads, which are too fine to place by hand, while avoiding visible blood vessels on the brain's surface.
  - A fully implanted, sealed device: electronics and battery sit under the scalp with no wires through the skin.
  - A wireless link that streams neural data to external devices.

Neuralink's first human implant was placed in January 2024 in a patient with quadriplegia (Noland Arbaugh). By March 2024, Arbaugh was using the implant to control a computer cursor and play video games using thought alone. A second patient was implanted in mid-2024. As of early 2026, Neuralink has reported successful long-term use in its initial patients, though the company has disclosed that some threads retracted from the cortex after implantation in the first patient, reducing the number of functional channels.4

Key idea: The engineering challenge of neural interfaces isn't just "can we detect neural signals" — that was solved decades ago. The challenge is doing it at scale (thousands of channels), chronically (for years without degradation), and safely (without damaging the brain or triggering immune rejection). Neuralink's contribution is primarily engineering: the threads, the robot, the sealed implant, the wireless link. The neuroscience of decoding brain signals uses established methods.

What's Been Achieved

Taking stock of the state of the art across all BCI approaches:

| Capability | Status | How |
| --- | --- | --- |
| Cursor control | Demonstrated in multiple patients | Motor cortex decoding (BrainGate, Neuralink) |
| Robotic arm control | Demonstrated (reach and grasp) | Motor cortex decoding (BrainGate, APL) |
| Handwriting decoding | 90 chars/min, 94% accuracy | Motor cortex imagined handwriting (BrainGate/Stanford) |
| Speech decoding | 62–78 words/min in paralyzed patients | Speech motor cortex (Stanford/UCSF, ECoG and Utah arrays)5 |
| Emotional state detection | Rudimentary (valence classification) | Deep brain recordings, limited to research settings |
| Memory enhancement | Early research stage | Hippocampal stimulation based on detected neural patterns |
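The cursor-control results rest on a simple idea: motor cortex neurons fire in proportion to intended movement direction, so binned firing rates can be regressed onto cursor velocity. A toy sketch on synthetic data (real systems layer Kalman filters or recurrent networks on this idea; the cosine-like tuning model and noise levels here are illustrative):

```python
import numpy as np

# Toy linear decoder: map binned firing rates to 2-D cursor velocity.
rng = np.random.default_rng(2)
n_units, n_bins = 96, 2000                # e.g. one Utah array, 50 ms bins

true_tuning = rng.normal(0, 1, (n_units, 2))   # each unit's preferred direction
velocity = rng.normal(0, 1, (n_bins, 2))       # intended cursor velocity
rates = velocity @ true_tuning.T + rng.normal(0, 0.5, (n_bins, n_units))

# Fit decoder weights by ridge regression: W = (X^T X + lam*I)^-1 X^T Y
lam = 1.0
X, Y = rates, velocity
W = np.linalg.solve(X.T @ X + lam * np.eye(n_units), X.T @ Y)

pred = X @ W
r = np.corrcoef(pred[:, 0], Y[:, 0])[0, 1]
print(f"decoded-vs-intended correlation: {r:.2f}")
```

The striking thing is how little machinery this takes: with ~100 well-placed channels, a linear map already recovers intended velocity well, which is why cursor control was the first capability demonstrated.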

The speech decoding results are particularly striking. In 2023, researchers at Stanford and UCSF published work on decoding attempted speech from a patient with ALS who had lost the ability to speak. Using a high-density electrode array on the speech motor cortex and a recurrent neural network decoder, they achieved real-time speech decoding at rates approaching natural conversation speed. The decoder didn't directly detect words — it detected the motor intentions for mouth, tongue, and jaw movements and mapped those to phonemes, which were then assembled into words.
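The final assembly step described above, frame-level phoneme predictions collapsed into words, can be sketched with a greedy CTC-style decode. The phoneme set, one-word lexicon, and one-hot "probabilities" below are toy stand-ins, not the published system's:

```python
import numpy as np

# "_" is the CTC blank symbol: it separates repeated phonemes and absorbs
# frames where the decoder is not committing to any phoneme.
PHONEMES = ["_", "HH", "AH", "L", "OW"]
LEXICON = {("HH", "AH", "L", "OW"): "hello"}   # toy one-word lexicon

def greedy_decode(frame_probs):
    """Collapse per-frame phoneme probabilities into a word.

    frame_probs: array (n_frames, n_phonemes) of phoneme probabilities.
    """
    ids = np.argmax(frame_probs, axis=1)
    out, prev = [], None
    for i in ids:
        if i != prev and PHONEMES[i] != "_":   # drop repeats and blanks
            out.append(PHONEMES[i])
        prev = i
    return LEXICON.get(tuple(out), "<unk>")

# Frame sequence: HH HH _ AH L L _ OW, encoded as one-hot "probabilities".
seq = [1, 1, 0, 2, 3, 3, 0, 4]
probs = np.eye(5)[seq]
print(greedy_decode(probs))  # -> hello
```

Production systems replace the lexicon lookup with a language model that scores candidate word sequences, which is a large part of why the reported word error rates are usable at conversational speed.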

Bidirectional Interfaces: Reading and Writing

All the examples above involve reading from the brain. Writing to the brain — stimulating neural tissue to create perceptions, sensations, or cognitive effects — is less mature but has established applications:

Cochlear Implants

The most successful neural interface in history. Over 1 million people worldwide have cochlear implants, which bypass damaged hair cells in the inner ear and directly stimulate the auditory nerve with electrical signals. A cochlear implant has 12–22 electrode channels, compared to the ~3,500 inner hair cells they replace — a dramatic reduction in bandwidth that nonetheless enables speech comprehension in most recipients. The success of cochlear implants demonstrates that the brain can learn to interpret crude electrical stimulation as meaningful information, given time and training.
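The bandwidth reduction a cochlear implant performs (thousands of hair cells down to a dozen-odd channels) can be illustrated by splitting audio into log-spaced frequency bands and taking each band's energy as one electrode's stimulation level. This FFT-based sketch is a caricature of real processing strategies, which use filterbanks and envelope extraction; it shows only the channel mapping, with illustrative band edges:

```python
import numpy as np

def channel_levels(audio, fs, n_channels=16, fmin=200.0, fmax=7000.0):
    """Map an audio snippet to per-electrode stimulation levels by
    summing spectral energy in log-spaced frequency bands."""
    spectrum = np.abs(np.fft.rfft(audio)) ** 2
    freqs = np.fft.rfftfreq(audio.size, 1 / fs)
    edges = np.geomspace(fmin, fmax, n_channels + 1)  # log-spaced bands
    levels = np.array([
        spectrum[(freqs >= lo) & (freqs < hi)].sum()
        for lo, hi in zip(edges[:-1], edges[1:])
    ])
    return levels / levels.max()

# A pure 1 kHz tone should drive mainly the band that contains 1 kHz.
fs = 16000
t = np.arange(0, 0.1, 1 / fs)
levels = channel_levels(np.sin(2 * np.pi * 1000 * t), fs)
print(np.argmax(levels))
```

Everything between a band's edges collapses to one number per time step; that the brain learns to hear speech through such a coarse code is the remarkable part.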

Deep Brain Stimulation (DBS)

Electrodes implanted deep in the brain deliver continuous electrical pulses to specific nuclei. DBS is an FDA-approved treatment for Parkinson's disease (stimulating the subthalamic nucleus to reduce tremor), essential tremor, and dystonia. It's also being investigated for treatment-resistant depression (stimulating area 25 or the ventral capsule/ventral striatum), OCD, and epilepsy.

DBS works, but we don't fully understand why. The prevailing theory is that the stimulation disrupts pathological neural oscillations — rhythmic firing patterns that underlie symptoms like tremor — but the precise mechanism is still debated. This illustrates a broader point: we can affect brain function with electrical stimulation without understanding the neural code well enough to write arbitrary information.
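Research on adaptive ("closed-loop") DBS builds directly on the oscillation theory: record the local field potential near the electrode, compute beta-band (13–30 Hz) power, and gate stimulation when it exceeds a threshold. A minimal sketch with an illustrative threshold and synthetic signals:

```python
import numpy as np

def beta_power(lfp, fs, band=(13.0, 30.0)):
    """Mean spectral power of an LFP snippet in the beta band."""
    spectrum = np.abs(np.fft.rfft(lfp)) ** 2 / lfp.size
    freqs = np.fft.rfftfreq(lfp.size, 1 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[mask].mean()

def should_stimulate(lfp, fs, threshold):
    """Gate stimulation on elevated beta power (threshold illustrative)."""
    return beta_power(lfp, fs) > threshold

fs = 1000
t = np.arange(0, 1, 1 / fs)
quiet = np.random.default_rng(3).normal(0, 0.1, t.size)
bursty = quiet + np.sin(2 * np.pi * 20 * t)   # strong 20 Hz rhythm
print(should_stimulate(quiet, fs, 1.0), should_stimulate(bursty, fs, 1.0))
```

Note what this sketch does not require: any understanding of what the beta rhythm encodes. It detects and disrupts a statistical signature of symptoms, which is precisely the gap between affecting brain function and reading or writing the neural code.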

Retinal Prostheses

Devices like the Argus II (now discontinued by its manufacturer) stimulated the retina with electrode arrays to provide rudimentary vision to people with retinitis pigmentosa. The resolution was extremely low — 60 electrodes, compared to ~6 million cones in the retina — providing perception of light patterns and edges rather than detailed vision. Higher-resolution retinal and cortical visual prostheses are in development.

The Convergence Question

If you extrapolate current trends — more electrodes, better decoding algorithms, bidirectional communication — you arrive at a provocative question: at what point does the distinction between biological and artificial intelligence become blurry?

Consider a hypothetical progression:

  1. Current state: decode motor intentions, provide sensory substitution (cochlear implants). The interface is narrow — limited to specific modalities.
  2. Near-term (5–15 years, speculative): decode complex intentions, restore speech in paralyzed patients, provide higher-bandwidth sensory feedback. The interface widens but remains focused on restoration.
  3. Mid-term (15–30 years, highly speculative): direct cognitive augmentation — offloading memory to external storage, accelerating certain computations, accessing information directly without sensory mediation.
  4. Long-term (speculative): high-bandwidth bidirectional interfaces that create a seamless connection between biological neural computation and external AI systems.

Steps 3 and 4 are deeply speculative and face challenges that may be fundamental rather than merely engineering problems. The brain's "code" — how information is represented in neural activity — is not well understood at the level that would be required for arbitrary read/write access. We can decode motor intentions because motor cortex has relatively straightforward neural correlates (neurons that fire for specific movements). Decoding abstract thoughts, memories, or concepts is a categorically harder problem.

[Figure: The bandwidth gap. The human brain has ~86 billion neurons, ~100 trillion synapses, and performs ~10^14 synaptic operations per second, with effectively unlimited internal bandwidth. External AI systems (LLMs, agents, databases) have arbitrary compute and memory available. Between them sits a BCI with ~1,000 channels: the interface, not the compute, is the bottleneck, by a factor of roughly 10^7.]
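The order-of-magnitude arithmetic behind that bottleneck is simple:

```python
import math

neurons = 86e9     # ~86 billion neurons in the human brain
channels = 1_000   # ~1,000 recording channels in current implants

gap = neurons / channels
order = math.floor(math.log10(gap))
print(f"{gap:.1e}, i.e. ~10^{order} more neurons than channels")
```

Even this understates the gap: each channel records a handful of nearby neurons at best, and says nothing about the ~100 trillion synapses where most of the computation happens.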

Current Limitations

The gap between the science fiction vision and the engineering reality is vast:

  - Channel count: current implants record ~1,000 channels against ~86 billion neurons.
  - Longevity: immune response and scar tissue degrade signal quality over months to years.
  - Surgical risk: every invasive implant means brain surgery, with attendant risks of bleeding, infection, and tissue damage.
  - The neural code: outside well-characterized areas like motor cortex, we don't know how to interpret, let alone write, neural activity.

Ethical Considerations

Neural interfaces raise ethical questions that don't arise with any other technology, because they operate on the organ that constitutes personal identity:

  - Mental privacy: neural data is the most intimate data imaginable. Who can record, store, or subpoena it?
  - Agency and identity: if a device can stimulate the brain, who is responsible for the thoughts and actions it influences?
  - Informed consent: patients with severe paralysis may have few alternatives, which complicates genuinely free consent.
  - Access and inequality: if cognitive augmentation ever works, who gets it?

Key idea: Neural interfaces are the most direct path to converging biological and artificial intelligence. But the convergence is currently constrained by a bandwidth gap of roughly seven orders of magnitude between what we can record and what the brain actually computes. Closing this gap is an engineering challenge on the scale of decades, not years — and the ethical frameworks for what we'd do with that capability are even less developed than the technology itself.

This chapter and the previous three have explored the frontier in different dimensions: alternative learning paradigms (Chapters 21–23), a different computational substrate (Chapter 24), and now a direct bridge between biological and artificial systems. The next chapter steps back and asks: given everything covered in this guide, what's the actual gap between current AI and the vision of a system that genuinely learns and grows through experience?

Next: Chapter 26 — The Gap. What exists vs. what would need to exist. Growing architecture, survival drive, persistent online learning — and an honest assessment of how far away we are.

1 The Utah array was developed by Richard Normann at the University of Utah in the 1990s. It remains the most widely used intracortical recording device in human BCI research.

2 Hochberg, L.R. et al. (2006). "Neuronal ensemble control of prosthetic devices by a human with tetraplegia." Nature 442:164–171.

3 Willett, F.R., Avansino, D.T., Hochberg, L.R., Henderson, J.M., and Shenoy, K.V. (2021). "High-performance brain-to-text communication via handwriting." Nature 593:249–254.

4 Neuralink's first human implant (the PRIME study) was reported through FDA Breakthrough Device updates and company blog posts. The thread retraction issue was disclosed by Neuralink in May 2024; the company adjusted its approach for subsequent implants.

5 Metzger, S.L. et al. (2023). "A high-performance neuroprosthesis for speech decoding and avatar control." Nature 620:1037–1046. Willett, F.R. et al. (2023). "A high-performance speech neuroprosthesis." Nature 620:1031–1036.