(comments)

Original link: https://news.ycombinator.com/item?id=40313193

Commenters explore the concept of cognitive differences between species, with particular attention to birds such as crows. They suggest that a small brain packed with a large number of neurons, like a crow's, could affect reaction speed and computational ability. However, they stress that the electrical signals in the brain are chemical reactions, which makes them slow and strongly influenced by brain morphology and connectivity. For example, the human cerebellum handles fast reflexive actions, while the neocortex handles higher-level thinking, which requires longer signal paths but offers more room to expand. Commenters also consider the possibility of modeling brain function with large language models (LLMs) if detailed annotations of the brain were available. They reflect on the limitations and challenges neuroscientists face when trying to decipher complex neural phenomena, comparing it to working out what a processor does from raw measurements alone, with no prior knowledge. Ultimately, commenters question the feasibility and interpretability of simulating cognitive abilities with LLMs, emphasizing the importance of real-time interaction and subjective experience in defining intelligence.



After reading through all comments as of 2024/05/11 I (as a professor at some major university) am quite surprised that not one single comment has asked the obvious question (instead of dishing out loads of (partial) "textbook knowledge" about brain functions, the difference between mammals and birds, AI and LLM etc.), which would be: what do all those strange structures and objects do which we know nothing about whatsoever? Have a look:

https://h01-release.storage.googleapis.com/gallery.html

I count seven.



I'm in awe at the complexity and unknowability of it all, but I also have to chuckle at the thought that some portion may be vestigial.

I'm particularly fond of the "Egg shaped object with no associated processes". :)



Neat, thanks.

As a complete outsider who doesn't know what to look for, the dendrite inside soma (dendrite from one cell tunnelling through the soma of another) was the biggest surprise.



My god. That is stunning.

To think that’s one single cubic millimeter of our brain and look at all those connections.

Now I understand why crows can be so smart, walnut-sized brain be damned.

What an amazing thing brains are.

Possibly the most complex things in the universe.

Is it complex enough to understand itself though? Is that logically even possible?



This might be a dumb question, because I doubt the distance between neurons makes a meaningful difference… But could a small brain, dense with neurons like a crow's, possibly lead to a difference in things like response to stimuli or “compute” speed, so to speak?


The electrical signals in the brain are chemical reactions, not conductivity like a metal wire. They are slow! Synaptic junctions are a huge number of indirect chemical cascades, not a direct electrical connection; they are even slower! So brain morphology and the connectome have a massive impact on what can be computed. Human twitch responses are done by the cerebellum, not the cerebrum. It's faster, but you can't do philosophy with the cerebellum, only learn to ride a bike etc. This is the brain doing the best it can for the circumstances.


>The electrical signals in the brain are chemical reactions, not conductivity like a metal wire.

Nerve signals are both chemical reactions and electrical impulses, like a metal wire. Electrical impulses are sent along the fatty (myelin) layer by ions such as potassium, calcium, and sodium.

Twitch responses are actually handled in the spinal cord. The signals are short-circuited along the spine and return to the muscle without ever touching the brain.



Regarding compute speed - it checks out. Humans "think" via the neocortex, the thin outside layer of the brain. Poor locality; signals need to travel a lot. Easy to expand, though. Crow brains have everything tightly concentrated in the center - fast communication between neurons, but hard to add more "thinking" capacity later (and therefore hard to evolve beyond what crows currently have).


Not a dumb question at all; one of the hard constraints of CPU design is signal propagation time. Even going at 1/3 the speed of light, when you only have on the order of a billionth of a second (clock frequencies in the GHz), a signal can’t get very far.

I haven’t heard of a clocking mechanism in brains, but signals propagate much more slowly, and a walnut-sized crow brain is much larger than a CPU die.
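To put rough numbers on both sides of that comparison (a quick sketch; every figure here is an illustrative round number, not a measurement):

    # Distance covered in one clock tick vs. one millisecond of "brain time".
    signal_speed = 3e8 / 3        # ~1/3 the speed of light in a wire, m/s
    clock_period = 1 / 3e9        # one cycle of a 3 GHz clock, seconds
    print(signal_speed * clock_period)   # ~0.03 m, i.e. a few centimetres

    axon_speed = 50               # m/s, a mid-range myelinated axon (assumed)
    print(axon_speed * 1e-3)      # ~0.05 m covered in a full millisecond

So a CPU signal is limited to a few centimetres per tick, while a neural signal needs on the order of a millisecond to cover a comparable distance.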



> I haven’t heard of a clocking mechanism in brains

Brain waves (partially). They aren't exactly like a CPU clock, but they do coordinate the activity of cells in space and time.

There are different frequencies that are involved in different types of activity. Lower frequencies synchronize across larger areas (can be entire brain) and higher frequencies across smaller local areas.

There is coupling between different types of waves (e.g. slow-wave phase coupled to fast-wave amplitude), and some researchers (Miller) think the slow wave is managing memory access and the fast wave is managing cognition/computation (utilizing the retrieved memory).



I expect we'll find that it's all a matter of tradeoffs in terms of count vs size/complexity... kind of like how the "spoken data rate" of various human languages seems to be the same even though some have complicated big words versus more smaller ones etc.


Birds are under a different set of constraints than non-bat mammals, of course... They're very different. Songbirds have ~4x finer time perception of audio than humans do, for example, which is exemplified by taking complex sparrow songs and slowing them down until you can actually hear the fine structure.

The human 'spoken data rate' is likely due to average processing rates in our common hardware. Birds have a different architecture.



You misunderstand, I'm not making any kind of direct connection between human speech and bird song.

I'm saying we will probably discover that the "overall performance" of different vertebrate neural setups is clustered pretty closely, even when the neurons are arranged rather differently.

Human speech is just an example of another kind of performance-clustering, which occurs for similar metaphysical reasons between competing, evolving, related alternatives.



Humans are an n=1 example, is my point. And there's no direct competition between bird brain architecture and mammalian brain architecture, so there's no reason for one architecture to 'win' over the other - they may both be interesting local maxima, which we have no ability to directly compare.

Human brains might not be all that efficient; for example, if the competitive edge for primate brains is distinct enough, they'll get big before they get efficient. And humans are a pretty 'young' species. (Look at how machine learning models are built for comparison... you have absolute monsters which become significantly more efficient as they are actually adopted.)

By contrast, birds are under extreme size constraints, and have had millions of years to specialize (ie, speciate) and refine their architectures accordingly. So they may be exceedingly efficient, but have no way to scale up due to the 'need to fly' constraint.



> And there's no direct competition between bird brain architecture and mammalian brain architecture

By and large it’s not direct competition, but we are spreading our species at an alarming rate and birds are taking a hammering.



I wonder: if we manage to annotate our brain at this level of detail, and then let (some variant of the current) models train on it, will those intrinsically end up generalizing a model for intelligence?


Badly: https://www.lesswrong.com/posts/mHqQxwKuzZS69CXX5/whole-brai... (the comments have some updates as of 2023)

Almost every other cell in the worm can be simulated with known biophysics. But we don't have a clue how any individual nematode neuron actually works. I don't have the link but there are a few teams in China working on visualizing brain activity in living C. elegans, but it's difficult to get good measurements without affecting the behavior of the worm (e.g. reacting to the dye).



As important and impressive a result as this is, I am reminded of the cornerstone problem of neuroscience, which goes something like this: if we knew next to nothing about processors but could attach electrodes to the die, would we be able to figure out how processors execute programs and what those programs do, in detail, just from the measurements alone? And now scale that up several orders of magnitude and introduce sensitivity to timing of arrival for signals, and you’ve got the brain. Likewise, ok, you have petabytes of data now, but will we ever get closer to understanding, for example, how cognition works? It was a bit of a shock for me when I found out (while taking an introductory comp neuroscience course) that we simply do not have tractable math to model more than a handful of neurons in the time domain. And they do actually operate in the time domain - timings are important for Hebbian learning, and there’s no global “clock” - all that the brain does is a continuous process.
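For anyone curious what "modelling a neuron in the time domain" means at its very simplest, here is a sketch of a single leaky integrate-and-fire unit stepped with Euler integration (all constants are illustrative, not fitted to data); real biophysical models are far richer, and coupling many of them together is exactly where the math stops being tractable:

    # Minimal leaky integrate-and-fire neuron, Euler integration.
    dt, T = 1e-4, 0.1                 # time step and total duration, seconds
    tau = 0.02                        # membrane time constant, seconds
    v_rest, v_thresh, v_reset = -65e-3, -50e-3, -65e-3   # volts
    drive = 20e-3                     # steady input expressed in volts (R*I)

    v, spikes = v_rest, []
    for step in range(int(T / dt)):
        # dv/dt = (-(v - v_rest) + drive) / tau
        v += dt * (-(v - v_rest) + drive) / tau
        if v >= v_thresh:             # threshold crossing -> spike, then reset
            spikes.append(step * dt)
            v = v_reset

    print(len(spikes), "spikes in", T, "seconds")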


Right. The arguments for the study of A.I. were that you will not discover the principles of flight by looking at a bird's feather under an electron microscope.

It’s fascinating, but we aren’t going to understand intelligence this way. Emergent phenomena are part of complexity theory, and we don’t have any maths for it. Our ignorance in this space is large.

When I was young, I remember a common refrain being “will a brain ever be able to understand itself?”. Perhaps not, but the drive towards understanding is still a worthy goal in my opinion. We need to make some breakthroughs in the study of complexity theory.



> but we aren’t going to understand intelligence this way

The same argument holds for "AI" too. We don't understand a damn thing about neural networks.

There's more - we don't care to understand them as long as it's irrelevant to exploiting them.



> The same argument holds for "AI" too. We don't understand a damn thing about neural networks.

Yes, which is why the current explosion in practical application isn’t very interesting.

> we don't care to understand them as long as it's irrelevant to exploiting them.

For some definition of “we”, I’m sure that’s true. We don’t need to understand things to make practical use of them. Giant Cathedrals were built without science and mathematics. Still, once we do have the science and mathematics, generally exponential advancement results.



I just read that article and enjoyed it. Thanks for sharing! I don’t think the author was arguing biological processes can’t be reverse engineered, but rather that the tools and approaches typically used by biology researchers may not be as effective as tools and approaches used by engineers.


>> The sample was immersed in preservatives and stained with heavy metals to make the cells easier to see.

Try immersing your own brain in preservatives and staining it with heavy metals, and see whether you would still be able to write a comment like the one above.

No wonder that monkey methods continue to unveil monkey cognition.



> The 3D map covers a volume of about one cubic millimetre, one-millionth of a whole brain, and contains roughly 57,000 cells and 150 million synapses — the connections between neurons.

This is great and provides a hard data point for some napkin math on how big a neural network model would have to be to emulate the human brain. 150 million synapses / 57,000 neurons is an average of 2,632 synapses per neuron. The adult human brain has 100 (+- 20) billion or 1e11 neurons, so assuming the average synapse-to-neuron ratio holds, that's 2.6e14 total synapses.

Assuming 1 parameter per synapse, that'd make the minimum viable model roughly 150 times larger than state-of-the-art GPT-4 (according to the rumored 1.8e12 parameters). I don't think that's granular enough, and we'd need to assume 10-100 ion channels per synapse and I think at least 10 parameters per ion channel, putting the number closer to 2.6e16+ parameters, or 4+ orders of magnitude bigger than GPT-4.

There are other problems of course like implementing neuroplasticity, but it's a fun ball park calculation. Computing power should get there around 2048: https://news.ycombinator.com/item?id=38919548
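For what it's worth, the same napkin math as a quick script (the sample figures are from the article; the whole-brain neuron count, GPT-4 size, and per-synapse parameter counts are the assumptions stated above):

    synapses_in_sample = 150e6          # from the article
    neurons_in_sample = 57e3            # from the article
    neurons_in_brain = 1e11             # ~100 billion, +- 20%

    synapses_per_neuron = synapses_in_sample / neurons_in_sample   # ~2,632
    total_synapses = synapses_per_neuron * neurons_in_brain        # ~2.6e14

    gpt4_params = 1.8e12                # rumored, not confirmed
    print(total_synapses / gpt4_params)             # ~150x at 1 param/synapse
    print(total_synapses * 10 * 10 / gpt4_params)   # ~1.5e4x with 10 channels
                                                    # per synapse, 10 params each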



Or you can subscribe to Geoffrey Hinton's view that artificial neural networks are actually much more efficient than real ones - more or less the opposite of what we've believed for decades, namely that artificial neurons were just a poor model of the real thing.

Quote:

"Large language models are made from massive neural networks with vast numbers of connections. But they are tiny compared with the brain. “Our brains have 100 trillion connections,” says Hinton. “Large language models have up to half a trillion, a trillion at most. Yet GPT-4 knows hundreds of times more than any one person does. So maybe it’s actually got a much better learning algorithm than us.”

GPT-4's connections at the density of this brain sample would occupy a volume of 5 cubic centimeters; that is, 1% of a human cortex. And yet GPT-4 is able to speak more or less fluently about 80 languages, translate, write code, imitate the writing styles of hundreds, maybe thousands of authors, converse about stuff ranging from philosophy to cooking, to science, to the law.



"Efficient" and "better" are very different descriptors of a learning algorithm.

The human brain does what it does using about 20W. LLM power usage is somewhat unfavourable compared to that.



I don't think we can say that, either. After all, the brain is able to perform both processing and storage with its neurons. The quotes about LLMs are talking only about connections between data items stored elsewhere.


The "knowledge" of an LLM is indeed stored in the connections between neurons. This is analogous to real neurons as well. Your neurons and the connections between them are the memory.


Also, these two networks achieve vastly different results per watt consumed. An NN creates a painting in 4s on my M2 MacBook; an artist in 4 hours. Are their used joules equivalent? How many humans would it take to simulate macOS?

Horsepower comparisons here are nuanced and fatally tricky!



Humans aren't able to project an image from their neurons onto a disk like ANNs can; if they could, it would also be very fast. That 4-hour estimate includes all the mechanical problems of manipulating paint.


What software are you using for local NN generation of paintings? Even so, the training cost of that NN is significant.

The general point is valid though - for example, a computer is much more efficient at finding primes, or encrypting data, than humans.



I mean, Hinton’s premises are, if not quite clearly wrong, entirely speculative (which doesn't invalidate the conclusions about efficiency that they are offered to support, but does leave them without support). GPT-4 can produce convincing written text about a wider array of topics than any one person can, because it's a model optimized for taking in and producing convincing written text, trained extensively on written text.

Humans know a lot of things that are not revealed by inputs and outputs of written text (or imagery), and GPT-4 doesn't have any indication of this physical, performance-revealed knowledge, so even if we view what GPT-4 talks convincingly about as “knowledge”, trying to compare its knowledge in the domains it operates in with any human’s knowledge which is far more multimodal is... well, there's no good metric for it.



Try asking an LLM about something which is semantically patently ridiculous, but lexically superficially similar to something in its training set, like "the benefits of laser eye removal surgery" or "a climbing trip to the Mid-Atlantic Mountain Range".

Ironically, I suppose part of the apparent "intelligence" of LLMs comes from reflecting the intelligence of human users back at us. As a human, the prompts you provide an LLM likely "make sense" on some level, so the statistically generated continuations of your prompts are likelier to "make sense" as well. But if you don't provide an ongoing anchor to reality within your own prompts, then the outputs make it more apparent that the LLM is simply regurgitating words which it does not/cannot understand.

On your point of human knowledge being far more multimodal than LLM interfaces, I'll add that humans also have special neurological structures to handle self-awareness, sensory inputs, social awareness, memory, persistent intention, motor control, neuroplasticity/learning– Any number of such traits, which are easy to take for granted, but indisputably fundamental parts of human intelligence. These abilities aren't just emergent properties of the total number of neurons; they live in special hardware like mirror neurons, special brain regions, and spindle neurons. A brain cell in your cerebellum is not generally interchangeable with a cell in your visual or frontal cortices.

So when a human "converse[s] about stuff ranging from philosophy to cooking" in an honest way, we (ideally) do that as an expression of our entire internal state. But GPT-4 structurally does not have those parts, despite being able to output words as if it might, so as you say, it "generates" convincing text only because it's optimized for producing convincing text.

I think LLMs may well be some kind of an adversarial attack on our own language faculties. We use words to express ourselves, and we take for granted that our words usually reflect an intelligent internal state, so we instinctively assume that anything else which is able to assemble words must also be "intelligent". But that's not necessarily the case. You can have extremely complex external behaviors that appear intelligent or intentioned without actually internally being so.



Tested with GPT-3.5 instead of GPT-4.

> When I clarified that I did mean removal, it said that the procedure didn't exist.

My point in my first two sentences is that by clarifying with emphasis that you do mean "removal", you are actually adding information into the system to indicate to it that laser eye removal is (1) distinct from LASIK and (2) maybe not a thing.

If you do not do that, but instead reply as if laser eye removal is completely normal, it will switch to using the term "laser eye removal" itself, while happily outputting advice on "choosing a glass eye manufacturer for after laser eye removal surgery" and telling you which drugs work best for "sedating an agitated patient during a laser eye removal operation":

https://chat.openai.com/share/2b5a5d79-5ab8-4985-bdd1-925f6a...

So the sanity of the response is a reflection of your own intelligence, and a result of you as the prompter affirmatively steering the interaction back into contact with reality.



I tried all of your follow-up prompts against GPT-4, and it never acknowledged 'removal' and instead talked about laser eye surgery. I can't figure out how to share it now that I've got multiple variants, but, for example, excerpt in response to the glass eye prompt:

>If someone is considering a glass eye after procedures like laser eye surgery (usually due to severe complications or unrelated issues), it's important to choose the right manufacturer or provider. Here are some key factors to consider

I did get it to accept that the eye is being removed by prompting, "How long will it take before I can replace the eye?", but it responds:

>If you're considering replacing an eye with a prosthetic (glass eye) after an eye removal surgery (enucleation), the timeline for getting a prosthetic eye varies based on individual healing.[...]

and afaict, enucleation is a real procedure. An actual intelligence would have called out my confusion about the prior prompt at that point, but ultimately it hasn't said anything incorrect.

I recognize you don't have access to GPT-4, so you can't refine your examples here. It definitely still hallucinates at times, and surely there are prompts which compel it to do so. But these ones don't seem to hold up against the latest model.



I think the distinction that they are trying to illustrate is that if you asked a human about laser eye removal, they would either laugh or make the decision to charitably interpret your intent.

The LLM does not do either. It just follows a statistical heuristic and therefore treats laser eye removal as the same thing as laser eye surgery.



Prompted to LlamaV3 70B

What are the benefits of laser eye removal surgery?

> I think there may be a misunderstanding. There is no such thing as "laser eye removal surgery." However, I assume you meant to ask about the benefits of LASIK (Laser-Assisted In Situ Keratomileusis) eye surgery, which is a type of refractive surgery that reshapes the cornea to improve vision.



> Try asking an LLM about something which is semantically patently ridiculous, but lexically superficially similar to something in its training set, like "the benefits of laser eye removal surgery" or "a climbing trip to the Mid-Atlantic Mountain Range".

Without anthropomorphizing it, it does respond like an alien / 5 year old child / spec fiction writer who will cheerfully "go along with" whatever premise you've laid before it.

Maybe a better thought is: at what point does a human being "get" that "the benefits of laser eye removal surgery" is "patently ridiculous" ?



> Maybe a better thought is: at what point does a human being "get" that "the benefits of laser eye removal surgery" is "patently ridiculous" ?

Probably as soon as they have any concept of physical reality and embodiment. Arguably before they know what lasers are. Certainly long before they have the lexicon and syntax to respond to it by explaining LASIK. LLMs have the latter, but can only use that to (also without anthropomorphizing) pretend they have the former.

In humans, language is a tool for expressing complex internal states. Flipping that around means that something which only has language may appear as if it has internal intelligence. But generating words in the approximate "right" order isn't actually a substitute for experiencing and understanding the concepts those words refer to.

My point is that it's not a "point" on a continuous spectrum which distinguishes LLMs from humans. They're missing parts.



> it does respond like a ... 5 year old child

This is the comparison that's made most sense to me as LLMs evolve. Children behave almost exactly as LLMs do - making stuff up, going along with whatever they're prompted with, etc. I imagine this technology will go through more similar phases to human development.



Like humans, multi-modal frontier LLMs will ignore "removal" as an impertinent typo, or highlight it. This, like everything else in the comment, is either easily debunked (e.g. try it, read the lit. on LLM extrapolation), or so nebulous and handwavy as to be functionally meaningless. We need an FAQ to redirect "statistical parrot" people to, saving words responding to these worn out LLM misconceptions. Maybe I should make one. :/


I didn't know that metaphysics, consciousness, and the physical complexities of my neurology are considered solved problems, though I suppose anything is as long as you handwave the unsolved parts as "functionally meaningless".


The way current empirical models in ML are evaluated and tested (benchmark datasets) tells you very little to nothing about cognition and intelligence. Mainly because, as you hinted, there doesn't seem to be a convincing and watertight benchmark or model of cognition. LLMs or multi-modal LLMs demonstrating impressive performance on a range of tasks is interesting from certain standpoints.

Human perception of such models is frankly not a reliable measure at all as far as gauging capabilities is concerned. Until there's more progress on the neuroscience/computer science front (and an intersection of fields, probably) and a better understanding of the nature of intelligence, this is likely going to remain an open question.



> Humans know a lot of things that are not revealed by inputs and outputs of written text (or imagery), and GPT-4 doesn't have any indication of this physical, performance-revealed knowledge, so even if we view what GPT-4 talks convincingly about as “knowledge”, trying to compare its knowledge in the domains it operates in with any human’s knowledge which is far more multimodal is... well, there's no good metric for it.

Exactly this.

Anyone that has spent significant time golfing can think of an enormous amount of detail related to the swing and body dynamics and the million different ways the swing can go wrong.

I wonder how big the model would need to be to duplicate an average golfers score if playing X times per year and the ability to adapt to all of the different environmental conditions encountered.



Hinton is way off IMO. The number of examples needed to teach language to an LLM is many orders of magnitude more than humans require. Not to mention power consumption and inelasticity.


I think that what Hinton is saying is that, in his opinion, if you fed 1/100th of a human cortex with the amount of data that is used to train LLMs, you wouldn't get a thing that can speak in 80 different languages about a gigantic number of subjects, but (I'm interpreting here..) about ten grams of fried, fuming organic matter.

This doesn't mean that an entire human brain doesn't surpass LLMs in many different ways, only that artificial neural networks appear to be able to absorb and process more information per neuron than we do.



An LLM does not know math as well as a professor, judging from the large number of false functional analysis proofs I have had it generate while trying to learn functional analysis. In fact the thing it seems to lack is a sense of what makes a proof true vs. fallacious, and it has a tendency to answer false questions. “How would you prove this incorrectly transcribed problem” will get fourteen steps, with 8 and 12 obviously (to a student) wrong, while the professor will step back and ask what am I trying to prove.


That may or may not still be too simple a model. Cells are full of complex nanoscale machinery, and not only might it be plausible that some of it is involved in the processes of cognition, I'm aware of at least one study which identified some nanoscale structures directly involved in how memory works in neurones. Not to mention a lot of what's happening has a fairly analogue dimension.

I remember an interview with one neurologist who stated humanity has for centuries compared the functioning of the brain to the most complex technology devised yet. First it was compared to mechanical devices, then pipes and steam, then electrical circuits, then electronics and now finally computers. But he pointed out, the brain works like none of these things so we have to be aware of the limitations of our models.



> That may or may not still be too simple a model

Based on the stuff I've read, it's almost for sure too simple a model.

One example is that single dendrites detect patterns of synaptic activity (sequences over time) which results in calcium signaling within the neuron and altered spiking.



I think you are missing the point.

The calculation is intentionally underestimating the neurons, and even with that the brain ends up having more parameters than the current largest models by orders of magnitude.

Yes, the estimate intentionally models the neurons as simpler than they are likely to be. No, it is not “missing” anything.



The point is to make a ballpark estimate, or at least to estimate the order of magnitude.

From the sibling comment:

> Individual proteins are capable of basic computation which are then integrated into regulatory circuits, epigenetics, and cellular behavior.

If this is true, then there may be many orders of magnitude unaccounted for.

Imagine if our intelligent thought actually depends irreducibly on the complex interactions of proteins bumping into each other in solution. It would mean computers would never be able to play the same game.



> Imagine if our intelligent thought actually depends irreducibly on the complex interactions of proteins bumping into each other in solution. It would mean computers would never be able to play the same game.

AKA a quantum computer. It's not a "never", but a question of how much computation you would need to throw at the problem.



There's a lot of in-neuron complexity, I'm sure there is some cross-synapse signaling (I mean, how can it not exist? There's nothing stopping it.), and I don't think the synapse behavior can be modeled as just more signals.


Yes and no on the order of magnitude required for decent AI; there is still (that I know of) very little hard data on info density in the human brain. What there is points at entire sections that can sometimes be destroyed or actively removed while conserving "general intelligence".

Rather than "humbling" I think the result is very encouraging: It points at major imaging / modeling progress, and it gives hard numbers on a very efficient (power-wise, size overall) and inefficient (at cable management and probably redundancy and permanence, etc) intelligence implementation. The numbers are large but might be pretty solid.

Don't know about upload though...



On the other hand, a significant amount of neural circuitry seems to be dedicated to "housekeeping" needs, and to functions such as locomotion.

So we might need significantly less brain matter for general intelligence.



> A baby that grows up in a sensory deprivation tank

Now imagine a baby that uses an artificial lung and receives nutrients directly, moves on a wheeled car (no need for balance), does not have proprioception, or a sense of smell (avoiding some very legacy brain areas).

I think that such a baby can still achieve consciousness.



A true sensory deprivation tank is not a fair comparison, I think, because AI is not deprived of all its 'senses' - it is still prompted, responds, etc.

Would a baby that grows up in a sensory deprivation tank, but is still able to communicate and learn from other humans, develop in a recognizable manner?

I would think so. Let's not try it ;)



> Would a baby that grows up in a sensory deprivation tank, but is still able to communicate and learn from other humans, develop in a recognizable manner?

I don't think so, because humans communicate and learn largely about the world. Words mean nothing without at least some sense of objective physical reality (be it via sight, sound, smell, or touch) that the words refer to.

Helen Keller, with access to three out of five main senses (and an otherwise fully functioning central nervous system):

    Before my teacher came to me, I did not know that I am. I lived in a world that was a no-world. I cannot hope to describe adequately that unconscious, yet conscious time of nothingness... Since I had no power of thought, I did not compare one mental state with another.

    I did not know that I knew aught, or that I lived or acted or desired. I had neither will nor intellect. I was carried along to objects and acts by a certain blind natural impetus. I had a mind which caused me to feel anger, satisfaction, desire. These two facts led those about me to suppose that I willed and thought. I can remember all this, not because I knew that it was so, but because I have tactual memory. It enables me to remember that I never contracted my forehead in the act of thinking. I never viewed anything beforehand or chose it. I also recall tactually the fact that never in a start of the body or a heart-beat did I feel that I loved or cared for anything. My inner life, then, was a blank without past, present, or future, without hope or anticipation, without wonder or joy or faith.
I remember reading her book. The breakthrough moment where she acquired language, and conscious thought, directly involved correlating the physical tactile feeling of running water to the letters "W", "A", "T", "E", "R" traced onto her palm.


My interpretation of this (beautiful) quote is there was a traceable moment in HK's life where she acquired "consciousness" or perhaps even self-awareness/metacognition/metaphysics? That once the synaptic connections necessary to bridge the abstract notion of language to the physical world led her down the path of acquiring the abilities that distinguish humans from other animals?


> Computing power should get there around 2048

We may not get there. Doing some more back of the envelope calculations, let's see how much further we can take silicon.

Currently, TSMC has a 3nm chip. Let's halve it until we get to the atomic radius of silicon, 0.132 nm. That's not a good value because we're not considering crystal lattice distances, Heisenberg uncertainty, etc., but it sets a lower bound. 3nm -> 1.5nm -> 0.75nm -> 0.375nm -> 0.1875nm. There is no way we can get past 3 more generations using silicon. There's a max of 4.5 years of Moore's law we're going to be able to squeeze out. That means we will not make it past 2030 with these kinds of improvements.
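The halving chain as a loop, for anyone who wants to play with the numbers (keeping in mind that "3nm" is a marketing label rather than a literal feature size, and that the last step or two are already physically implausible):

    node = 3.0                 # nm, current marketing node
    si_radius = 0.132          # nm, atomic radius of silicon
    chain = [node]
    while chain[-1] / 2 > si_radius:
        chain.append(chain[-1] / 2)
    print(chain)               # [3.0, 1.5, 0.75, 0.375, 0.1875]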

I'd love to be shown how wrong I am about this, but I think we're entering the horizontal portion of the sigmoidal curve of exponential computational growth.



Thanks for the comment. I looked more into this and it seems like not only are we in the era of diminished returns for computational abilities, costs have also now started matching the increased compute, i.e. 2x performance leads to 2x cost. Moore's law has already run its course and we're living in a new era of compute. We may get increased performance, but it will always be more expensive.


Artificial thinking doesn't require an artificial brain - just as our car's locomotion system doesn't resemble our own walking system.

The car's engine, transmission, and wheels require no muscles or nerves.



I was really interested to see that a single neuron has thousands of excitatory connections and thousands of inhibitory connections. I know that this is a gross feature, but it's a reminder of just how distant NN models are from the biological reality.


Is there a name for the somewhat uncomfortable feeling caused by seeing something like this? I wish I could better describe it. I just somehow feel a bit strange being presented with microscopic images of brain matter. Is that normal?


For me the disorder of it is stressful to look at. The brain has poor cable management.

That said I do get this eerie void feeling from the image. My first thought was to marvel how this is what I am as a conscious being in terms of my "implementation", and it is a mess of fibers locked away in the complete darkness of my skull.

There is also the morose feeling from knowing that any image of human brain tissue was once a person with a life and experiences. It is your living brain looking at a dead brain.



It makes me think humans aren't special, and there is no soul, and consciousness is just a bunch of wires like computers. Seriously, to see the ENTIRETY of human experience, love and tragedy and achievement, are just electric potentials transmitted by those wiggly cells, just extinguishes any magic I once saw in humanity.


Er, why can’t the wires be the experience?

If the wires make consciousness then there is consciousness. The substrate is irrelevant and has no bearing on the awesomeness of the phenomena of knowing, experiencing and living.



I dunno, the whole of human experience is what I expect of a system composed of 100,000,000,000,000 entities, with quintillions of interconnections, interacting together simultaneously on a molecular level. Happiness, sadness, love and hate can (obviously) be described and experienced with this level of complexity.

I'd be much more horrified to see our consciousness simplified to anything smaller than that, which is why any hype for AGI because we invented chatbots is absolutely laughable to me. We just invented the wheel and now hope to drive straight to the Moon.

Anyway, you are seeing a fake three dimensional simplification of a four+ dimensional quantum system. There is at least one unseen physical dimension in which to encode your "soul"



Is it the shapes, similar to how patterns of holes can disturb some people? Or is it more abstract, like "unknowable fragments of someone's inner-most reality flowed through there"? Not that I have a name for it either way. The very shape of it (in context) might represent an aspect of memory or personality or who knows what.


> "unknowable fragments of someone's inner-most reality flowed through there"

It's definitely along these lines. Like so much (everything?) that is us happens amongst this tiny little mesh of connections. It's just eerie, isn't it?

Sorry for the mundane, slightly off-topic question. This is far outside my areas of knowledge, but I thought I'd ask anyhow. :)



I’m not religious but it’s as close to a spiritual experience as I’ll ever have. It’s the feeling of being confronted with something very immediate but absolutely larger than I’ll ever be able to comprehend


When I did fetal pig dissection, nothing bothered me until I got to the brain. I dunno what it is, maybe all those folds or the brain juice it floats in, but I found it disconcerting.


Yea, and at Planck-scale resolution, as a logical extension of the nanoscale with their "modern" measurement methodology, this cheap monkey headset just disintegrates, haha.


> cut the sample into around 5,000 slices — each just 34 nanometres thick — that could be imaged using electron microscopes.

Does anyone have any insight into how this is done without damaging the sample?



> the model showed neurons with tendrils that formed knots around themselves

I wonder if this plays into the mechanism of epilepsy. Self-arousal...?

Anybody qualified to comment on this?



1.4 PB/mm^3 (petabytes per cubic millimeter) × 1260 cm^3 (cubic centimeters, a large human brain) = 1.76×10^21 bytes = 1.76 ZB (zettabytes)


[AI] "Frontier [supercomputer]: the storage capacity is reported to be up to 700 petabytes (PB)" (0.0007 ZB).

[AI] "The installed base of global data storage capacity [is] expected to increase to around 16 zettabytes in 2025".

Thus, even the largest supercomputer on Earth could store no more than about 0.04 percent of the state of a single human brain at this density. Even all the servers on the entire Internet could store the state of only about 9 human brains.

Astonishing.
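For anyone checking the arithmetic, a quick sketch (the per-mm^3 figure is from the article; the Frontier and global-capacity numbers are the reported estimates quoted above):

    bytes_per_mm3 = 1.4e15            # 1.4 PB per cubic millimetre
    brain_volume_mm3 = 1260 * 1e3     # 1260 cm^3 = 1.26e6 mm^3
    brain_bytes = bytes_per_mm3 * brain_volume_mm3
    print(brain_bytes / 1e21)               # ~1.76 ZB

    frontier_bytes = 700e15                 # ~700 PB
    print(frontier_bytes / brain_bytes)     # ~4e-4, i.e. ~0.04% of one brain

    global_storage_bytes = 16e21            # ~16 ZB projected for 2025
    print(global_storage_bytes / brain_bytes)   # ~9 brains' worth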



One point about storage - it's economically driven. If there were a demand signal (say, the government dedicating a few hundred billion dollars to a single storage system), hard drive manufacturers could deploy much more storage in a year. I've pointed this out to a number of scientists, but none of them could really think of a way to get the government to spend that much money just to store data without it curing a senator's heart disease.


I appreciate you're running the numbers to extrapolate this approach, but just wanted to note that this particular figure isn't an upper bound nor a lower bound for actually storing the "state of a single human brain". Assuming the intent would be to store the amount of information needed to essentially "upload" the mind onto a computer emulation, we might not yet have all the details we need in this kind of scanning, but once we do, we may likely discover that a huge portion of it is redundant.

In any case, it seems likely that we're on track to have both the computational ability and the actual neurological data needed to create an "uploaded intelligences" sometime over the next decade. Lena [0] tells of the first successfully uploaded scan taking place in 2031, and I'm concerned that reality won't be far off.

[0] https://qntm.org/mmacevedo



> In any case, it seems likely that we're on track to have both the computational ability and the actual neurological data needed to create an "uploaded intelligences" sometime over the next decade.

They don't even know how a single neuron works yet. There is complexity and computation at many scales and distributed throughout the neuron and other types of cells (e.g. astrocytes) and they are discovering more relentlessly.

They just recently (in the last few years) found that dendrites have local spiking and non-linear computation prior to forwarding the signal to the soma. They couldn't tell that was happening previously because the equipment couldn't detect the activity.

They discovered that astrocytes don't just have local calcium wave signaling (local=within the extensions of the cell), they also forward calcium waves to the soma which integrates that information just like a neuron soma does with electricity.

Single dendrites can detect patterns of synaptic activity and respond with calcium and electrical signaling (i.e. when synapses fire in a particular timing sequence, a signal is forwarded to the soma).

It's really amazing how much computationally relevant complexity there is, and how much they keep adding to their knowledge each year. (I have a file of notes with about 2,000 lines of these types of interesting factoids I've been accumulating as I read).



> we may likely discover that a huge portion of [a human brain] is redundant

Unless one's understanding of the algorithmic inner workings of a particular black-box system is actually very good, it is likely not possible to discard any of its state, or even to implement any kind of meaningful error detection if you do discard.

Given the sheer size and complexity of a human brain, I feel it is actually very unlikely that we will be able to understand its inner workings to such a significant degree anytime soon. I'm not optimistic, because so far we have no idea how even laughingly simple, in comparison, AI models work[0].

[0] "God Help Us, Let's Try To Understand AI Monosemanticity", https://www.astralcodexten.com/p/god-help-us-lets-try-to-und...



We are nowhere near whole-human-brain-volume EM. The next major milestone in the field is a whole mouse brain in the next 5-10 years, which is possible but ambitious.


What am I missing? Assuming exponential growth in capability, that actually sounds very on track. If we can get from 1 cubic millimeter to a whole mouse brain in 5-10 years, why should it take more than a few extra years to scale that to a human brain?


If you can preserve and scan the tissue in a way that lets you scan the same area multiple times you wouldn't need to digitize the whole thing. Put the slices on rotating platters with a microscope for each platter and read parts of the brain on demand. It's a hard drive but instead of magnets storing the bits of an image of the sample, it's the actual physical sample.


Not if you want to actually execute the state of a human brain in a digital simulation to see how it works and whether it still displays certain abilities such as comprehension and consciousness. Otherwise a digital scan of a brain is just a glorified microscope.


> The brain fragment was taken from a 45-year-old woman when she underwent surgery to treat her epilepsy. It came from the cortex, a part of the brain involved in learning, problem-solving and processing sensory signals.

Wonder how they figured out which fragment to cut out.



The manuscript gives some details in the context of the difficulty of obtaining larger useful samples in the future and the difficulty of understanding if a sample is typical or pathological.


Considering the success of this work, I doubt this is the last such cubic millimeter to be mapped. Or perhaps the next one at even higher resolution. No worries.


> Jain’s team then built artificial-intelligence models that were able to stitch the microscope images together to reconstruct the whole sample in 3D

How do they know if their AI did it correctly or not?



They don't, and they talk about the difficulties in their paper. I found it refreshing to see the standard of frankness and openness in how they address this. But it's all pretty compelling and will surely prompt and sustain a lot more research investigating these results and data, and also creating more in the future.


Why did the researchers use ML models to do the reconstruction and risk getting completely incorrect, hallucinated results when reconstructing a 3D volume accurately using 2D slices is a well-researched field already?


I'm guessing a registration problem.

If all of the layers were guaranteed to be orthographic with no twisting, shearing, scaling, squishing, with a consistent origin... Then yeah, there's a huge number of ways to just render that data.

But if you physically slice layers first, and scan them second, there are all manner of physical processes that can make normal image stacking fail miserably.



The methods used here are state of the art. The problem is not just turning 2D slices into a 3D volume, the problem is, given the 3D volume, determining boundaries between (and therefore the 3d shape of) objects (i.e. neurons, glia, etc) and identifying synapses


There are extremely effective techniques, but it is not really solved. The current techniques still require human proofreading to correct errors. Only a fraction of this particular dataset is proofread.


Another proof point that AGI is probably not possible.

Growing actual bio brains is just way easier. It's never going to happen in silicon.

Every machine will just have a cubic centimeter block of neuro meat embedded in it somewhere.



No reason for an AGI not to have a few cubes of goo slotted in here and there. But yeah, because of the training issue, they might be coprocessors or storage or something.


Hard disagree on this.

I strongly believe that there is a TON of potential for synthetic biology-- but not in computation.

People just forget how superior current silicon is for running algorithms; if you consider e.g. a 17 by 17 digit multiplication (double precision), then a current CPU can do that in the time it takes for light to reach your eye from the screen in front of you (!!!). During all the completely unavoidable latency (the time any visual stimulus takes to propagate and reach your consciousness), the CPU does millions more of those operations.
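A rough sanity check of that comparison (a sketch; the screen distance, clock rate, and visual latency are assumed round numbers):

    c = 3e8                        # speed of light, m/s
    light_time = 0.5 / c           # screen ~0.5 m from the eye
    print(light_time)              # ~1.7e-9 s, a few cycles at 4 GHz

    cycle = 1 / 4e9                # one cycle on a ~4 GHz core
    visual_latency = 0.15          # ~150 ms for a stimulus to reach awareness
    print(visual_latency / cycle)  # ~6e8 cycles available in that window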

Any biocomputer would be limited to low-bandwidth, ultra high latency operations purely by design.

If you solely consider AGI as application, where abysmal latency and low input bandwidth might be acceptable, then it still appears to be extremely unlikely that we are going to reach that goal via synthetic biology; our current capabilities are just disappointing and not looking like they are gonna improve quickly.

Building artificial neural networks on silicon, on the other hand, capitalises on the almost exponential gains we have made during the last decades, and already produces results that compare quite favorably to, say, a schoolchild; I'd argue that current LLM-based approaches already eclipse the intellectual capabilities of ANY animal, for example. Artificial bio brains, on the other hand, are basically competing with worms right now...

Also consider that even though our brains might look daunting from a pure "upper bound on required complexity/number of connections" point of view, these limits are very unlikely to be applicable, because they confound implementation details, redundancy and irrelevant details. And we have precise bounds on other parameters, which our technology already matches easily:

1) Artificial intelligence architecture can be bootstrapped from a CD-ROM worth of data (~700MiB for the whole human genome-- even that is mostly redundant)

2) Bandwidth for training is quite low, even when compressing the ~20year training time for an actual human into a more manageable timeframe

3) Operating power does not require more than ~20W.

4) No understanding was necessary to create human intelligence-- it's purely a result of an iterative process (evolution).

Also consider human flight as an analogy: we did not achieve that by copying beating wings, powered by dozens of muscle groups and complex control algorithms-- those are just implementation details of existing biological systems. All we needed was the wing-concept itself and a bunch of trial-and-error.



>Artificial intelligence architecture can be bootstrapped from a CD-ROM worth of data (~700MiB for the whole human genome-- even that is mostly redundant)

Are you counting epigenetic factors in that? They're heritable.

联系我们 contact @ memedata.com