Knowledge is a relative thing. Our books, works and collective knowledge are mere shadows of the truth of reality. Some are flat out wrong; some are misinformed and somewhat wrong; others are mostly correct but cast in a form that limits the true nature of the information. That which is correct is usually just a shade of truth.
For instance, if I am looking at a multi-layer cake without knowing its contents, saying that it is covered in frosting would be correct, but it neglects the internal layers: the filling, the ingredients, the baking process, and how the cook obtained or devised the recipe. It is simplistic. This lesson applies to almost every type of information recorded in the collective knowledge base on which we humans rely.
Another impossibly annoying aspect of our knowledge is its scattered and inconsistent organization. When researching a particular subject, one is forced to spend nearly all of one's time tracking down snippets across a dizzying array of loosely organized sources. Some bits of information are simply too small or too obscure to find deliberately; they turn up only when we happen upon them by chance. This would not be nearly the obstacle it sounds if all the information you did manage to track down were of a reliable, accurate and sound nature, but it is not. Compounded with our original argument, the task of finding said information and then separating the wheat from the chaff, so to speak, can seem an insurmountable obstacle, especially when there is no consensus on what is supposed to be accurate and what is supposed to be incorrect.
In our current "information age" this problem is complicated even further by a glut of data thrown haphazardly into our knowledge base, most of it of no use.
One of the issues with this glut is superfluous data. Take our previous cake example. Instead of a lack of data about the cake, what if I researched it and found this description:
"This fine cake is quite tasty. The eggs used in its batter were laid by free range chickens at a farm in northeastern Wisconsin. The vanilla was produced from beans grown and processed in Brazil and has an expiration date two years from now. The frosting was a bit of a pain to make as the bowl slipped out of the mixer twice and I had to restart the batches, but it finally came out in the end. Oh, and my husband thought it tasted a bit too sweet at first but then decided it was merely because he'd just eaten something meaty. Overall, a great cake."
Other than informing you that the cake is tasty and that the recipe includes eggs and vanilla, none of it was pertinent information. It was pointless chatter that served as an obstacle to gleaning those three tiny bits of data. Not that such superfluous data doesn't have a place in the vast world of writing, but its overuse across so many sources of information, multiplied by the many sources we read each day, creates an overload of useless information the brain must sort through to find the simplest of truths.
How, then, do we divine any truth from this confounded mess? It is a conundrum that current academia has not explored in any real manner.
Two dimensional thinking: its issues and uses
What is two dimensional thinking versus three dimensional?
Two dimensional thinking implies concepts that are flat, only partially representative of the whole. Three dimensional thinking takes that flat representation and conjoins it with intersecting dimensions of context, rendering a deeper field of meaning.
In essence, 3D thinking represents not just one additional tangent of information but ALL relevant bits of data and their interrelation to the whole of the concept. In the cake example above, a true 3D representation of conceptual data would look more like a vast family tree than a single flat description.
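To make the "family tree" image concrete, here is a minimal sketch in Python of one possible representation, an adjacency list of facts and their relations. The node names and relation labels are purely illustrative, not a real schema:

```python
# A minimal sketch of the "family tree" view of a concept: each fact
# about the cake is a node, and edges record how facts interrelate.
# All node names and relations here are illustrative inventions.

# Adjacency list: node -> list of (relation, neighbor) pairs.
concept = {
    "cake":   [("covered_in", "frosting"),
               ("contains",   "filling"),
               ("made_from",  "batter"),
               ("produced_by", "recipe")],
    "batter": [("contains", "eggs"), ("contains", "vanilla")],
    "recipe": [("obtained_by", "the cook")],
}

def walk(node, depth=0):
    """Print the concept tree rooted at `node`, one level per indent."""
    for relation, neighbor in concept.get(node, []):
        print("  " * depth + f"{node} --{relation}--> {neighbor}")
        walk(neighbor, depth + 1)

walk("cake")
```

The 2D view is the single edge "cake is covered in frosting"; the 3D view is the whole tree, and every node added deepens the field of meaning.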
2D Thinking in Scientific Studies
We find this issue in medical studies. For example, Dr. Doe publishes a study concluding that caffeine contributes to obesity. (This is not an actual finding, thank God.) Dr. Doe used a randomized sample of 1000 study participants and tracked their weight over a ten year period. In those ten years, there was a statistical correlation between the amount of coffee consumed and the amount of weight gained. No one can criticize or defame said study, as on its face it is true: those 1000 people did gain more weight the more coffee they drank.
Now, what you didn't know were the following facts:
Dr. Doe first advertised for participants at a local business college, a computer science college and other related places with plenty of eager students. At the time, he rationalized that those in college would make more reliable and eager study participants. 97% of his participants came from the neighboring colleges.
These students were more likely to be coffee drinkers, using it to help keep them alert. When they moved on to their careers, the nature of their educations lent itself toward desk jobs or careers requiring minimal labor. Desk jobs, again, tend to go hand in hand with coffee consumption, and so usage rose over the years. As adulthood pressed on and their desk jobs wore on, they became more and more sedentary. The quick pace of office lunches makes convenience foods the easiest thing to obtain and consume in the allotted hour.

After ten years of sitting on their butts, drinking coffee and eating burgers for lunch (generally speaking), there HAPPENED to be a rise in obesity that appeared to be correlated with coffee consumption. The study did not ask about work habits, eating habits, exercise habits, stress levels, exposure to computer, janitorial, copying, office or laboratory related chemicals, or anything else that might contribute to obesity. In addition, there were a number of unconsidered variables that could also have contributed, such as genetics, familial propensity toward sedentary career paths, IQ and anything else RELEVANT, even microscopically so, that might alter one's health and in turn one's weight.
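The trap here is a classic confounder, and a few lines of Python make it visible. Below is a minimal sketch using purely synthetic data (every number is invented for illustration): in this toy model coffee has no causal effect on weight at all, yet the naive correlation is strong, and it shrinks sharply once the hidden variable is held fixed:

```python
# A toy simulation of the confounded study above: coffee does not cause
# weight gain here, yet the two still correlate because a hidden variable
# (a sedentary desk job) drives both. Synthetic data only; the numbers
# are illustrative, not from any real study.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hidden confounder: how sedentary each participant's career is (0..1).
sedentary = rng.uniform(0, 1, n)

# Coffee intake (cups/day) rises with desk work, plus personal noise.
coffee = 1 + 4 * sedentary + rng.normal(0, 0.8, n)

# Ten-year weight gain (kg) is driven by the sedentary lifestyle,
# NOT by coffee, plus noise for everything the study never measured.
weight_gain = 2 + 10 * sedentary + rng.normal(0, 2, n)

# The 2D view: coffee and weight gain look strongly related.
print("naive corr(coffee, gain):",
      round(np.corrcoef(coffee, weight_gain)[0, 1], 2))

# A step toward the 3D view: hold the confounder roughly fixed
# (stratify on it) and the apparent coffee effect shrinks sharply.
for lo, hi in [(0.0, 0.33), (0.33, 0.66), (0.66, 1.0)]:
    mask = (sedentary >= lo) & (sedentary < hi)
    r = np.corrcoef(coffee[mask], weight_gain[mask])[0, 1]
    print(f"corr within sedentary {lo:.2f}-{hi:.2f}:", round(r, 2))
```

Dr. Doe's study reports only the first number; the flat correlation is real, but the conclusion drawn from it is not.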
In the end, the blatant refusal to even attempt to incorporate deeper fields of data, and therefore meaning, into the study renders it worthless. It would be akin to reading the first two pages of War and Peace and then writing a full report on what little you had gleaned.
3D thinking in practice
Let's say a new study on obesity was started, but this time a more cohesive body of data was collected. Vast questionnaires, coupled with environmental testing, physical exams, blood tests and more, would be administered on at least an annual basis for the LIFE of each participant. Social mores, family life and pop culture would be taken into account. Genetics, IQ, neuro-imaging, psych evals, work and home studies, as well as any evolving data needs, would all ultimately be tracked and plotted.
Over the life of the study, data points could be plotted, giving an evolving view of the whole and ever increasing insight into the nature of obesity. Only at the end of the study, though, when all participants have died, would the actual truth of the data come to light. This intricate web of interrelated facts and figures could provide glimpses not only into the nature of obesity but also into information the study was never designed to acquire. Hotspots of crossing data would appear, pointing the way to new knowledge or new fields of inquiry.
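As a rough illustration of what "hotspots of crossing data" might mean computationally, here is a minimal sketch, again on synthetic data with hypothetical variable names and an arbitrary threshold: scan every pair of tracked variables and flag the strongly correlated ones for closer inquiry:

```python
# A minimal sketch of the "hotspot" idea: given many variables tracked
# per participant, scan every pair for strong correlations. Variable
# names, data, and the 0.3 threshold are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(1)
n = 500
variables = {
    "sleep_hours":   rng.normal(7, 1, n),
    "stress_score":  rng.normal(50, 10, n),
    "exercise_mins": rng.normal(30, 15, n),
}
# In this toy world, weight gain is partly driven by stress and
# (inversely) by exercise, so those pairs should light up below.
variables["weight_gain"] = (
    0.1 * variables["stress_score"]
    - 0.05 * variables["exercise_mins"]
    + rng.normal(0, 1, n)
)

names = list(variables)
data = np.column_stack([variables[k] for k in names])
corr = np.corrcoef(data, rowvar=False)

# Report every pair whose correlation crosses an (arbitrary) threshold:
# these are the "hotspots of crossing data" worth a closer look.
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if abs(corr[i, j]) > 0.3:
            print(f"{names[i]} <-> {names[j]}: r = {corr[i, j]:.2f}")
```

A real lifetime study would track thousands of variables rather than four, and a flagged hotspot would be the start of an inquiry, not a conclusion; the point is only that unanticipated relationships fall out of the web on their own.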
That, and only something as BASIC as that, would qualify as a study utilizing three dimensional concepts.
The practicality of 3D data collection
Some may argue that a study such as the one described above is impractical. I say to that: anything less is an exercise in futility, save for areas of direct cause and effect (and even those could be enhanced by 3D thinking concepts), and ultimately does mankind a disservice. Anything less, as demonstrated, not only provides simplistic data, it provides data that is deceptively inaccurate. To truly study even a minimally complex piece of information and actually glean some truth from the results, rules governing best practices must be developed and implemented.
Backwards acquisition of data
We are not forced to wait for new, more strictly governed fields of study to emerge before this concept can be put to beneficial practice. By simply parsing and re-examining pre-existing data, then cross-referencing it into a 3D model, one can begin to glean new information. In addition, putting our previously acquired knowledge through this process would serve to winnow and vet the irrelevant and inaccurate from the whole of the human knowledge base.
For something like this, we will need to rely on computer programs of a much higher order than what is currently available. Human data collection and review would be far too time consuming and rife with error and prejudice.
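The simplest form of this backwards acquisition is just linking records that were collected separately. Here is a minimal sketch under that assumption; the two "studies," their field names and their records are all hypothetical:

```python
# A minimal sketch of "backwards acquisition": take two pre-existing,
# separately collected datasets, join them on a shared participant id,
# and look for relationships neither dataset could show alone.
# All ids, fields and values below are hypothetical.

study_a = {  # old diet survey: id -> daily coffee (cups)
    101: 4.0, 102: 1.0, 103: 5.5, 104: 0.5,
}
study_b = {  # old occupational survey: id -> hours seated per day
    101: 9.0, 102: 3.0, 103: 10.0, 105: 6.0,
}

# Cross-reference: only ids present in both sources can be linked.
linked = {
    pid: (study_a[pid], study_b[pid])
    for pid in study_a.keys() & study_b.keys()
}

for pid, (coffee, seated) in sorted(linked.items()):
    print(f"participant {pid}: {coffee} cups/day, {seated} h seated")

# With many such joins, each shared key adds another "dimension," and
# patterns invisible to either source alone start to surface.
```

Doing this across the whole of the human knowledge base, rather than two toy tables, is precisely the scale at which human review fails and higher-order programs become necessary.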
————————————————————————
T.A.B. Copyright 2014