The fact is you're both wrong: neither you nor anyone else is as logical as you think you are.
You think and feel with your brain all the time, but how often do you think about your brain itself: its strengths, its weaknesses, and its limitations?
Your brain is a battlefield peppered with electrochemical explosions; a wet bundle of nerves, firing at each other within a glue-like soup. It does some things well and others poorly.
Not only do you think with your brain, you also use it to perceive: it's the primary mechanism by which you collect information about the world around you. It's a bit like the fox guarding the henhouse: the same entity that provides you with information is also telling you what it means. Any information you take in -- through your eyes, nose, ears, tongue and fingertips -- is heavily filtered before you are even consciously aware of it.
This is a necessity: if you consciously processed every piece of information you are capable of perceiving, you would be so flooded with sensation that you would be unable to function. Differences in this kind of sensory filtering are thought to play a role in autism.
Now, think of your brain as if it were a computer for a second.
Your hardware is the bundle of nerves that makes up your brain; it's simply gray matter.
Your applications are patterns of thought, which are built up over the course of years. Some of them, like basic algebra and how to read, were written by others; and some of them, like the way you kiss or buy clothes, you probably wrote yourself. Some of them run like clockwork, others are riddled with bugs; some are in beta, others are in version 9.0. If you're a life hacker [What's a hacker?] you have probably written more of your own "brain apps" than most people.
Your OS is the low-lying software that all the other apps rely on. How much do you know about it? Most people don't think about it much.
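To make the metaphor concrete, here is a playful Python sketch of the three layers. Every class, method, and value in it is invented purely for illustration; it's a toy, not a claim about neuroscience:

```python
# A playful sketch of the brain-as-computer metaphor.
# All names and behaviors here are invented for illustration.

class Hardware:
    """The gray matter: it just fires signals."""
    def fire(self, signal):
        return f"neurons firing: {signal}"

class HumanOS:
    """The low-lying software every app relies on: perception
    filters, emotional responses, attention."""
    def __init__(self, hardware):
        self.hardware = hardware

    def perceive(self, raw_input):
        # Heavy filtering happens before conscious awareness:
        # only a sliver of the input gets through.
        filtered = raw_input[:10]
        return self.hardware.fire(filtered)

class BrainApp:
    """A learned pattern of thought: algebra, reading,
    the way you kiss or buy clothes."""
    def __init__(self, name, human_os):
        self.name = name
        self.human_os = human_os

    def run(self, data):
        # Every app ultimately goes through the OS and the hardware.
        return f"{self.name} -> {self.human_os.perceive(data)}"

human_os = HumanOS(Hardware())
reading = BrainApp("reading", human_os)
print(reading.run("a very long stream of sensory information"))
```

The point of the toy: the app never touches the raw input; whatever it "knows" has already been filtered by the OS layer below it.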
If you want to get serious about communication, it's time to learn more about the Human OS.
Understanding how your mind works will make you a more effective communicator, so you'll know the path of least resistance to getting people's attention and getting them focused on the things you think are important. If you do it well, people will even start to think that you're logical!
In the coming weeks, I'll be posting more on the subject in a series called "Hacking the Human OS." It may become a book at some point. The goal is to give people practical, proven tips from people who have learned how to turn ideas into action by engaging and motivating others.
Map of the Human OS
How we learn
Everyone is welcome to the conversation: please, share your thoughts.
Keep in touch! Sign up to get updates and occasional emails from me.
This is a topic I found interesting enough to build my site around. However, it does suggest ideas few are comfortable with. For instance, you are going to have to find bridge terms and methodologies so that engineering types can find some basis for implementation.
I find this fascinating as well. Of course, folks like William Gibson have kicked the ol' wetware idea around for a while. There are lots of analogies to have fun with: the kernel is the most primitive section of the brain, providing the truly low-level control necessary to sustain life, there are "frameworks" for generic activities, subclasses, inheritance, etc. Of course, it's important not to get carried away with modelitis.
One thing that can be said: given the overall patterning and function of the brain, as we know it, the human OS is as good an abstraction as anything. But yes, engineers will need the real connection between name and implementation to do the really powerful things.
It seems exploring this also ties into some great areas though. Color theory could be explained by understanding the color management in "your" OS. For instance, people with an innate sense of taste use ColorSync, while those who make lousy color choices use whatever crappy system Windows uses...
Exactly right, though I prefer to talk about this in terms of a memory-prediction framework (a la Jeff Hawkins in On Intelligence), which makes the lack of inherent logic more clear than the OS metaphor. Basically, he suggests the mind functions through a set of sensory patterns passed up from the senses toward the "higher" cognitive areas of the brain, with predictions about what should happen next moving down from the top to influence action. The point at which they intersect is experience (i.e., your feedback from the world and feedforward from your brain combine to form your reality).

Moreover, any particular "thought" or "memory" is just a set of spatial-temporal patterns firing in the brain and resonating with patterns already there. Given this (brief) overview, saying we are not logical simply amounts to saying that our brains do not follow a linear progression from premises through clean calculations to a final conclusion, despite how many of us NT personality types might argue otherwise. They (brains) can't; they work by fuzzy association, with parts of patterns triggering larger, complete patterns (Minsky and Schank weren't too far off after all).

The use of "logic" is, therefore, an overlay of linguistic convenience to pare down the vastness of experience (Polanyi's "attend to") and construct both a message and a context of understanding. The context (a chain of reasoning) is important, since we can't really encode/decode the way suggested by the transmission theory of communication. In reality it's more like providing an abstracted pattern of experience with enough detail that hopefully the audience can re-member something similar and construct a similar pattern in their mind.
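The "parts of patterns triggering complete patterns" idea can be shown with a toy associative memory. This is my own illustration, not Hawkins's actual model; the patterns and the overlap score are invented for the example:

```python
# Toy associative memory: a partial pattern recalls whichever stored
# pattern it overlaps with most -- fuzzy resonance, not deduction.
# Patterns and scoring are invented for illustration; real cortical
# dynamics are far richer.

def overlap(partial, stored):
    """Fraction of the fragment's elements found in the stored pattern."""
    partial, stored = set(partial), set(stored)
    return len(partial & stored) / len(partial)

def recall(partial, memory):
    """Return the stored pattern that best resonates with the fragment."""
    return max(memory, key=lambda stored: overlap(partial, stored))

memory = [
    ("red", "sports car", "engine roar", "desire"),
    ("rain", "umbrella", "gray sky"),
    ("chalk", "algebra", "classroom"),
]

# A fragment of experience triggers the larger pattern it belongs to.
print(recall(("sports car", "red"), memory))
# → ('red', 'sports car', 'engine roar', 'desire')
```

Note there is no chain of reasoning anywhere in `recall`: the fragment simply resonates with the closest stored pattern, which is the whole point of the contrast with "logic."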
Plus, even when we think about a moment of our own thinking (or experience), the actual thinking is already past and we are reconstructing what we can remember about the event which brings with it, at least, unintentional misrepresentation and the coloring of our own predictions about such events.
This notion of mind-as-pattern-matcher rather than mind-as-reasoner overturns traditional rhetoric and argumentation (as well as a great many other communication theories). I'm currently dealing with this in my technical communication and rhetoric dissertation as part of what I call ObjectRhetoric.
Getting back to the OS metaphor...one of the better fits for the comparison seems to be Squeak, a cross-platform implementation of Smalltalk. The Squeak "OS" even has a measure of neuroplasticity in its ability to rewrite itself from within (as you put it, lifehacks): "...the Squeak virtual machine can be edited and debugged by running it in Squeak itself." However, the breaking point for these metaphors is invariably memory, since computer memory is a static snapshot of data bits and human memory is always-already an act of reconstruction and composition.
I recently blogged on a similar topic, What's Your UI? You can find it here: http://www.jaredrichardson.net/blog/2005/10/17/
This is a great thread. It's exciting to see so many ideas and comments.
I hope this conversation continues to develop into a larger discussion.
My goal here is to take a very practical, common-sense, rule-of-thumb approach to the human OS issue: to take many of the things I have learned over the years as a professional communicator (working with salespeople, managers, etc. -- people who don't write books but know what works) and demonstrate how they can be valuable to others.
I want to avoid getting tangled up in metaphor (always a danger!), and stick to practical, proven "things that work."
So the metaphor is useful as a "big bucket" to contain the thoughts, but the idea is to help people "hack the human OS" to become better communicators and get better results from their interactions with other people.
There needs to be a layer for things that are hardwired but modifiable, like emotions, desires, and basic drives like hunger.
They might fit within the metaphor of the kernel, the most basic software that normally does not change, and interacts most closely with the hardware.
My own analysis of decision making and marketing is that the kernel and the low-level software surrounding it (emotions) are the most important elements when making a decision or trying to convince someone of something. The rational mind just supplies the justification for what the primitive mind is going to decide regardless of reason.
This is how I see the process:
1. Input received (eyes, ears, etc.)
2. Initial reaction comes from the OS's low-level software (emotions), which tags the input with a base reaction (interest, desire, anger, fear).
3. Input and reaction are passed on to high-level software (language, memory, reason), which then tries to justify and rationalize the reaction.
4. In some cases, the high-level software will override the reaction if it finds sufficient contradictions.
5. High-level software saves the reaction and rationalization to disk (memory). If enough layers of association are added to a particular reaction, it becomes a filter that modifies the reaction (conditioning).
This is how "Symbol of Sex (attractive woman) juxtaposed with Symbol of Power (sprawled on sports car) --> Desire" gets translated into "You know, I've always been a fan of the Mustang, I'm so glad they made a new one." Which eventually leads to an attraction to the car itself, independent of the base desires that originally caused it.
Tracking down the real reason you've made decisions or done certain things is a crucial step to hacking your own OS.
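The five steps above can be sketched as a toy pipeline. Every tag, threshold, and phrase below is invented for illustration; it only mirrors the shape of the process, not any real cognitive mechanism:

```python
# Toy version of the five-step process: emotional tagging first,
# rationalization second, occasional override, then storage.
# All rules and values here are invented for illustration.

EMOTIONAL_TAGS = {            # step 2: low-level software tags the input
    "attractive woman on sports car": "desire",
    "loud bang": "fear",
}

memory = []                   # step 5: the "disk"

def react(stimulus, contradictions=0):
    # Step 1: input received (stimulus).
    tag = EMOTIONAL_TAGS.get(stimulus, "neutral")              # step 2
    # Step 3: high-level software rationalizes the reaction.
    rationalization = f"I've always liked things like {stimulus!r}"
    # Step 4: override only if the contradictions pile up.
    if contradictions >= 3:
        tag = "suppressed"
    memory.append((stimulus, tag, rationalization))            # step 5
    return tag

react("attractive woman on sports car")   # → "desire"
react("loud bang", contradictions=3)      # → "suppressed"
```

Notice that the rationalization is generated *after* the tag is assigned, which is exactly the claim: reason justifies what the primitive mind already decided.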
Don't forget to read this post too.
That's an interesting (and long) post. I don't agree 100%, but you raise some interesting points.
I will say I think some of the separation you talk about between subsystems is itself an illusion. The very metaphor you're creating to try to modify your own mind creates new limitations on your ability to change things, because it's still an artificial construct, just a new one.
Of course, the part of us that is "us" is an artificial construct itself, so anything we do is by nature going to be artificial. Can't escape the game while you're in the game, and all that.
I think there is value in using this as analogy, but one can only take it so far. We are really at an infancy in understanding our brains structurally, and may be using a language that needs an upgrade as it is.
I've been working on a model of consciousness you may be interested in: We are only ever aware of either the "future" or the "past", and there is no real present (try to put your finger on the present). The universe as we know it may be measured as frequencies, or levels of vibration within a boundary of space-time. Our consciousness must also be able to be measured as a frequency. It could be seen as a continuous wave with new information at the crest, and old at the trough. In between these is a reset period, or blink, when information is processed.
New info is added to a massive relational database. No new information has any meaning alone, except in how it relates to all the old information. The old information includes all prior sensory input in memory, plus genetic inheritance and much we are not yet aware of (epigenetics etc.); everything in the part of the universe distinguished as “ourself”.
Action actually happens at this point. The new information is compared with all the old information, a decision is made, and a command is issued. The physical body responds, is repositioned. Blink/reset. New information is again taken in. Rinse, repeat.
This happens at such a rapid pace that it makes up what we perceive as the "flow of consciousness". This flow, however, is less a river than a film strip in motion. We have one eye towards the future, and one towards the past. Only one is ever open at a time.
This wavelength is altered in times of "fight or flight", where the crest is larger, versus times of contemplation, where the trough takes precedence. And, of course, only a small portion of the information that comes in is experienced by us as consciousness or awareness.
What value is there in this way of thinking? I don’t know. Just something I’ve been chewing on lately. Let me know if you have any thoughts or resources.
Interesting thoughts Doug -- I will have to digest this a bit. Two great resources I can recommend immediately are:
Multimedia Learning by Richard Mayer and Consciousness Explained by Daniel Dennett.
A to B: Do you know the table of 2?
A: Where did you learn it?
B: In school?
A: That means you learned it after birth. It is application software, which you learned.
B: What is the OS, then?
A: Your behaviour is your OS.
The OS includes patience, temper, sex fantasies (survival or decaying).
B: Where did I learn this OS?
A: During your parents' sex; the way they behaved became your behaviour (OS).