The Absence of Language
- We have our transistors in the form of neurons
- We have a motherboard, RAM, and a power supply, a whole computer, in the form of our brain
- We have programs in the form of pre-existing algorithms in our brain that parse sight into outlines of objects, filter different frequencies of sound waves (to pick out dialogue in a noisy room), and much more
- Lastly, we also have an internal clock in the form of the brainstem, which maintains our heartbeat and breathing.
Aside from integrating sight, touch, hearing, smell, and taste in the frontal lobe of the brain, (hypothesis) our brain has algorithms that simplify data before it reaches the "consciousness". We see outlines of objects and shadows, not light rays of different frequencies. I say there are "pre-wired" algorithms, written in neurons, that parse light rays into outlines our brain can understand. The algorithms I have thought of are listed below:
- Outlining algorithm
- (Imaginative) 3D system: turns outlines into objects in 3D, similar to Tesla Autopilot (without predefined 3D models)
- Balance: sensing the muscles used to hold the body upright and figuring out the direction of gravity
- Separating different sound waves into distinct streams: voices, music, hums, etc.
- Moving short-term memory into long-term memory (things that are repeated are naturally deemed more useful and kept long-term) -> though this may just be a property of the type of storage used, similar to RAM, and not an algorithm at all
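As a rough illustration, the "outlining algorithm" is analogous to classic edge detection in computer vision. Here is a minimal sketch using a Sobel-style filter on a toy image; the image, kernels, and threshold are my own illustrative choices, not a claim about how neurons actually do it:

```python
# Toy "outlining algorithm": Sobel-style edge detection on a tiny
# grayscale image, loosely analogous to extracting outlines from
# raw light intensity before higher processing sees the scene.

# A 6x6 "image": a bright square on a dark background (values 0-9).
image = [
    [0, 0, 0, 0, 0, 0],
    [0, 9, 9, 9, 9, 0],
    [0, 9, 9, 9, 9, 0],
    [0, 9, 9, 9, 9, 0],
    [0, 9, 9, 9, 9, 0],
    [0, 0, 0, 0, 0, 0],
]

# Standard Sobel kernels for horizontal and vertical gradients.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def detect_edges(img, threshold=10):
    """Return a binary edge map: 1 where the gradient magnitude is large."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                edges[y][x] = 1
    return edges

for row in detect_edges(image):
    print("".join("#" if v else "." for v in row))
# Prints the outline of the square:
# ......
# .####.
# .#..#.
# .#..#.
# .####.
# ......
```

Notice that the uniform interior of the square produces no response; only the boundary, where intensity changes, is kept. That is the sense in which an outline is a simplification of the raw input.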
In the current state of AI, we mostly have a premature version of each of these algorithms. The problem is that they do not have a generalized output, so one algorithm cannot communicate with the others. Not to mention that none of the algorithms is polished yet. While AI can differentiate between objects, its inputs are raw bits, and it is almost impossible for us to manipulate the neural net to our will.
|These notes are from last year, so if you are going to nitpick,
I am just going to say these are not my full thoughts now.
Plus, this is just part of the story anyway.
It has occurred to me that I have not alluded to the title at all. The title of this post illustrates how we imagine scenarios from memory. My hypothesis is that the inputs to our vision include not only our eyesight but our memory and logic (frontal lobe) as well. We take the memory of the inputs required for a certain image and feed that back into the input of our visual cortex, otherwise known as the occipital lobe. The occipital lobe would then transfer its output back into the frontal lobe to process the image and make it look real.
There is a possibility that it skips the occipital lobe altogether and wires the output from the frontal lobe right back into itself, but this is just my hypothesis.
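The loop I am hypothesizing can be sketched in code. The function names below are my own labels for the hypothesis, and the "processing" is a placeholder; the only point is the wiring: imagination reuses the same visual pathway as perception, with memory substituted for eyesight.

```python
# Toy sketch of the hypothesized imagination loop: memory output is
# fed back into the same "visual" stage that normally handles
# eyesight, then returned to a "frontal" stage for integration.
# All names and processing here are illustrative placeholders.

def visual_cortex(raw_input):
    """Stand-in for early visual processing: reduce raw input to 'outlines'."""
    return {"outlines": sorted(set(raw_input))}

def frontal_lobe(processed):
    """Stand-in for integration: assemble a 'scene' from outlines."""
    return "scene(" + ",".join(processed["outlines"]) + ")"

def recall(memory, key):
    """Retrieve the stored raw inputs that once produced an image."""
    return memory[key]

# Normal perception: eyesight -> visual cortex -> frontal lobe.
eyesight = ["cup", "table", "cup"]
memory = {"kitchen": eyesight}
perceived = frontal_lobe(visual_cortex(eyesight))

# Imagination (per the hypothesis): memory -> visual cortex -> frontal
# lobe, reusing the same pathway with no eyesight involved.
imagined = frontal_lobe(visual_cortex(recall(memory, "kitchen")))

print(perceived)  # scene(cup,table)
print(imagined)   # scene(cup,table)
```

The alternative in the paragraph above would simply wire `frontal_lobe`'s output back into itself, skipping `visual_cortex`; either way the key idea is a feedback loop rather than a one-way sensory pipeline.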
Other Parts of the brain:
- Reflexive response
- The human brain, while composed of a network of neurons like a neural net, has "hard-wired" aspects shaped by evolution
- Our brain is more hard-wired than it is adaptive
|Same thing, this is from last year