Neat, thanks. As a complete outsider who doesn't know what to look for, the dendrite inside the soma (a dendrite from one cell tunnelling through the soma of another) was the biggest surprise.
Badly: https://www.lesswrong.com/posts/mHqQxwKuzZS69CXX5/whole-brai... (the comments have some updates as of 2023)

Almost every other cell in the worm can be simulated with known biophysics, but we don't have a clue how any individual nematode neuron actually works.

I don't have the link, but there are a few teams in China working on visualizing brain activity in living C. elegans. It's difficult to get good measurements without affecting the behavior of the worm (e.g. it reacting to the dye).
> The 3D map covers a volume of about one cubic millimetre, one-millionth of a whole brain, and contains roughly 57,000 cells and 150 million synapses — the connections between neurons.

This is great and provides a hard data point for some napkin math on how big a neural network model would have to be to emulate the human brain. 150 million synapses / 57,000 cells is an average of ~2,632 synapses per cell. The adult human brain has 100 (±20) billion, or 1e11, neurons, so assuming that synapse/neuron ratio holds, that's 2.6e14 total synapses. Assuming 1 parameter per synapse, that'd make the minimum viable model roughly 150 times larger than state-of-the-art GPT-4 (going by the rumored 1.8e12 parameters).

I don't think that's granular enough, though: if we assume 10-100 ion channels per synapse and at least 10 parameters per ion channel, the number lands closer to 2.6e16+ parameters, or 4+ orders of magnitude bigger than GPT-4. There are other problems of course, like implementing neuroplasticity, but it's a fun ballpark calculation. Computing power should get there around 2048: https://news.ycombinator.com/item?id=38919548
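A minimal sketch of that arithmetic in Python (the per-synapse and per-ion-channel parameter counts are the assumptions stated above, not measured values):

```python
# Napkin math: scaling the 1 mm^3 sample up to a whole human brain.
# All "parameters per X" figures are assumptions, not measurements.

synapses_sample = 150e6   # synapses in the mapped cubic millimetre
cells_sample = 57_000     # cells in the same volume
neurons_brain = 1e11      # ~100 (+-20) billion neurons in an adult brain
gpt4_params = 1.8e12      # rumored GPT-4 parameter count

synapses_per_neuron = synapses_sample / cells_sample  # ~2,632
total_synapses = neurons_brain * synapses_per_neuron  # ~2.6e14

# Optimistic case: 1 parameter per synapse.
params_low = total_synapses * 1
# More granular case: 10 ion channels/synapse * 10 params/channel.
params_high = total_synapses * 10 * 10

print(f"{params_low:.1e} params ({params_low / gpt4_params:.0f}x GPT-4)")
print(f"{params_high:.1e} params ({params_high / gpt4_params:.0f}x GPT-4)")
# -> 2.6e+14 params (146x GPT-4)
# -> 2.6e+16 params (14620x GPT-4)
```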
"Efficient" and "better" are very different descriptors of a learning algorithm. The human brain does what it does using about 20W. LLM power usage is somewhat unfavourable compared to that. |
The "knowledge" of an LLM is indeed stored in the connections between neurons. This is analogous to real neurons as well. Your neurons and the connections between them is the memory.
Tested with GPT-3.5 instead of GPT-4.

> When I clarified that I did mean removal, it said that the procedure didn't exist.

My point in my first two sentences is that by clarifying with emphasis that you do mean "removal", you are actually adding information into the system, indicating to it that laser eye removal is (1) distinct from LASIK and (2) maybe not a thing. If you do not do that, but instead reply as if laser eye removal were completely normal, it will switch to using the term "laser eye removal" itself, while happily outputting advice on "choosing a glass eye manufacturer for after laser eye removal surgery" and telling you which drugs work best for "sedating an agitated patient during a laser eye removal operation": https://chat.openai.com/share/2b5a5d79-5ab8-4985-bdd1-925f6a...

So the sanity of the response is a reflection of your own intelligence, and a result of you as the prompter affirmatively steering the interaction back into contact with reality.
Hinton is way off, IMO. The number of examples needed to teach language to an LLM is many orders of magnitude larger than what humans require, not to mention the power consumption and inelasticity.
Artificial thinking doesn't require an artificial brain.
Compare our own walking system with a car's locomotion system: the car's engine, transmission, and wheels require no muscles or nerves.
When I did fetal pig dissection, nothing bothered me until I got to the brain. I dunno what it is, maybe all those folds or the brain juice it floats in, but I found it disconcerting.
Yeah, and at Planck-scale resolution, as a logical extension of the nanoscale with their "modern" measurement methodology, this cheap monkey headset just disintegrates, haha.
> the model showed neurons with tendrils that formed knots around themselves

I wonder if this plays into the mechanism of epilepsy. Self-excitation...? Anybody qualified to comment?
1.4 PB/mm^3 (petabytes per cubic millimeter) × 1260 cm^3 (a large human brain) = 1.76×10^21 bytes = 1.76 ZB (zettabytes)
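A quick check of that conversion (assuming the dataset's 1.4 PB/mm^3 storage density scales linearly to a 1260 cm^3 brain):

```python
# Scale the sample's storage density up to a whole brain.
pb_per_mm3 = 1.4            # petabytes per cubic millimetre of tissue
brain_mm3 = 1260 * 1_000    # 1260 cm^3 -> mm^3 (1 cm^3 = 1000 mm^3)
total_bytes = pb_per_mm3 * 1e15 * brain_mm3
print(f"{total_bytes:.2e} bytes = {total_bytes / 1e21:.2f} ZB")
# -> 1.76e+21 bytes = 1.76 ZB
```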
> we may likely discover that a huge portion of [a human brain] is redundant

Unless one's understanding of the algorithmic inner workings of a particular black-box system is actually very good, it is likely impossible not only to discard any of its state, but even to implement any kind of meaningful error detection if you do discard. Given the sheer size and complexity of a human brain, I feel it is actually very unlikely that we will be able to understand its inner workings to such a significant degree anytime soon.

I'm not optimistic, because so far we have no idea how even laughably simple (in comparison) AI models work[0].

[0] "God Help Us, Let's Try To Understand AI Monosemanticity", https://www.astralcodexten.com/p/god-help-us-lets-try-to-und...
We are nowhere near whole-human-brain-volume EM. The next major milestone in the field is a whole mouse brain in the next 5-10 years, which is possible but ambitious.
The manuscript gives some details in the context of the difficulty of obtaining larger useful samples in the future and the difficulty of understanding whether a sample is typical or pathological.
Considering the success of this work, I doubt this is the last such cubic millimeter to be mapped. Or perhaps the next one at even higher resolution. No worries.
No reason for an AGI not to have a few cubes of goo slotted in here and there. But yeah, because of the training issue, they might be coprocessors or storage or something.
https://h01-release.storage.googleapis.com/gallery.html
I count seven.