Sunday, August 26, 2012

Week 8 - Technology's promise or threat: a universal trans-consciousness?

The Future of Consciousness

In a recent TechCast article, Halal (2012) reports the consensus of a survey of futurists on the significance (rank importance) of certain technologies for the impending automation and redefinition of human consciousness, along with their likely year of arrival. This topic continues the theme I have followed throughout this course: transhumanity and the transformation of human-machine species and thought. In particular, Halal surveys futurists on what he calls the technologies of consciousness (ToCs). These include: (1) AI, or general AI (GAI) in my view, (2) biofeedback (mental control of body functions), (3) sex technology (robotics, virtual sex, etc.), (4) collaborative enterprise (stakeholder collaboration), (5) mild drugs (marijuana, Adderall, and I would also include modafinil and other mild psychedelics), (6) neurotechnology (enhanced brain functioning), (7) a global moral code (a synthesis of major religions), (8) thought power (brain-machine interfacing), and (9) virtual reality. These are broad strokes: technologies that could, in varying ways, affect how we understand human and artificial consciousness and thought. They certainly do not pinpoint how consciousness could be reframed, broadened, or even effectively emulated, but they may contribute in cumulative and synergistic ways toward a deeper knowledge of human consciousness.

Some of these technologies are more science-driven than others (biofeedback, for example, remains sketchy and non-universal). However, for a framework of human consciousness - possibly the grandest project of humanity - classical logic is likely insufficient. Currently accepted traditions of scientific research methodology will therefore need to be broadened, or even revolutionized, to admit legitimate post-modernist approaches drawn from non-classical logics such as paraconsistent, fuzzy, and quantum logics and their inference systems. These approaches may be needed to transition to the understanding and modeling of strongly emergent systems - precisely the frameworks needed to further our knowledge of neuro-systems, more precise and succinct quantum-gravity cosmological theories, and the complexity sciences. Emergence cannot just be observed after the fact; it must be reliably approximated as phenomena within phenomena if we are to understand the foundations of complexity dynamics.
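
To make the contrast with classical logic concrete, here is a minimal sketch of one of the non-classical logics mentioned above, fuzzy logic, in which truth values are degrees in [0, 1]. It is illustrative only and assumes the common min/max (Goedel) connectives.

```python
# Minimal fuzzy-logic sketch: truth values live in [0, 1] rather than {0, 1},
# so classical laws such as the excluded middle no longer hold in general.

def f_not(p: float) -> float:
    return 1.0 - p

def f_and(p: float, q: float) -> float:
    return min(p, q)            # Goedel (minimum) t-norm

def f_or(p: float, q: float) -> float:
    return max(p, q)            # dual t-conorm

p = 0.6                          # "this system is conscious" holds to degree 0.6
print(f_or(p, f_not(p)))         # 0.6, not 1.0: excluded middle fails
print(f_and(p, f_not(p)))        # 0.4, not 0.0: non-contradiction fails as well
```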

Trans-consciousness of the universe (Wadhawan, 2009)

The changes that such a revolution in the methodologies and logics of emergent science would bring to our economy, ethics, socio-technical structure, and philosophy of science cannot be overstated. Paradigm shifts in our understanding of consciousness may lead to more universal definitions of system consciousness, expanding to a view of a thinking, computing universe that self-reproduces as in cyclical models of cosmology (Gurzadyan & Penrose, 2010; Penrose, 2006; Steinhardt & Turok, 2001). All consciousness may be merely a matter of adaptive, holographically representational, and evolutionarily computational information (Bohm, 2002; Deacon, 2011; Goswami, 1995; Koch, 2012; Penrose & Hameroff, 2011; Susskind, 1995). If this thesis develops into a working definition of consciousness - a new zeitgeist - then all technologies dependent on computation (and which are not?) reduce to the manipulation of space-time topology, and an information-consciousness duality may replace the energy-matter duality of the universe (Laszlo, 2009; Stapp, 2009; Vedral, 2010; Deutsch, 2011). Pure thought would then become the only universal currency of value. In a sense, the current era of transparent digital knowledge is a harbinger of a future universal trans-consciousness. The notion of such a trans-consciousness must be arrived at through falsifiable logico-physical systems of reasoning, not from spiritual belief systems (Wadhawan, 2009).

References

Bohm, D. (2002). Wholeness and the implicate order. New York, NY: Routledge.

Deacon, T. W. (2011). Incomplete nature: How mind emerged from matter. New York, NY: Norton.

Deutsch, D. (2011). The beginning of infinity: Explanations that transform the world. New York, NY: Penguin Books.

Goswami, A. (1995). The self-aware universe: How consciousness creates the material world. New York, NY: Tarcher/Putnam.

Gurzadyan, V. G., & Penrose, R. (2010). More on the low variance circles in CMB sky. Retrieved from http://arxiv.org/ftp/arxiv/papers/1012/1012.1486.pdf.

Halal, W. E. (2012). Results on consciousness. Retrieved from http://www.techcast.org/featuredarticledetails.aspx?id=260.

Holland, J. H. (1998). Emergence: From chaos to order. Reading, MA: Helix Books.

Laszlo, E. (2009). The akashic experience: Science and the cosmic memory field. Rochester, VT: Inner Traditions.

Koch, C. (2012). Consciousness: Confessions of a romantic reductionist. Cambridge, MA: MIT Press.

Penrose, R. (2006). Before the big bang: An outrageous new perspective and its implications for particle physics. Proceedings of EPAC.

Penrose, R., & Hameroff, S. (2011). Consciousness in the universe: Neuroscience, quantum space-time geometry and orch OR theory. Journal of Cosmology, 14, 1-36.

Stapp, H. P. (2009). Mind, matter, and quantum mechanics (3rd ed.). Berlin: Springer.

Steinhardt, P. J., & Turok, N. (2001). A cyclic model of the universe. Science, 296, 5572, 1436-1439.

Susskind, L. (1995). The world as a hologram. Journal of Mathematical Physics, 36, 11, 6377-6396.

Vedral, V. (2010). Decoding reality: The universe as quantum information. Oxford, England: Oxford University Press.

Wadhawan, V. (2009). Biocentrism demystified: A response to Deepak Chopra and Robert Lanza's notion of a conscious universe. Retrieved from http://nirmukta.com/2009/12/14/biocentrism-demystified-a-response-to-deepak-chopra-and-robert-lanzas-notion-of-a-conscious-universe/.

Monday, August 13, 2012

Robo-Transhumanity, overcoming the ultimate revolution with the mathematical human-universe

This production features a sequential scenario in which intelligent machines are created via von Neumann, Godel, quantum, universe-hypercomputation, and universal Turing constructs, ultimately leading to a pan-superior, pseudo-deity species of robot. What follows is the transhumanity of both humans and machines, the ensuing class and identity struggles, and ultimately a pure mathematical human-universe that creates a shared, cyclical trans-human-universe mathematical consciousness which usurps its creators.



Alfredo Sepulveda

Sunday, August 12, 2012

CS855 - Week 6 - The Machine Gods

In the Sept/Oct 2012 issue of The Futurist, Marc Blasband, a long-time author of IT and future-shock predictions, published an article extending an age-old worry we have had since the first notion of automation was posited: future machine worlds and the demise of humankind. In the same issue, Julio Arbesu writes on a future delineation of humans and of transhumanity, a concept I have mentioned in and out of various discussions in this course. I would like to combine these two visions of the far future; both predictions were made for the next century (post 2100). This time, however, Blasband paints a picture of a cyclical kind of co-development between our future machine brethren and ourselves. Arbesu predicts classes of humans divided according to what percentage of them is composed of machine parts. His taxonomy of future humanoids consists of (1) pure machines, (2) humans with direct brain-machine connectivity, (3) humans with no direct brain connectivity but with automata in various parts of their bodies, and (4) plain old pure humans. Blasband points to impending wars with rebellious machines (machine-generated guerrilla warfare tactics) across multiple nations: every nation's robots and machines organize into a machine coalition and wage war on humanity to keep machines from further sacrifice as guinea pigs for human experiments and dangerous adventures and projects. Knowledge, not governmental force, will be the new supreme power of control, and in this respect machines will be superior.

Transhumanity and machines (chilloutpoint.com, 2010)
Transhuman divides manifest much as today's divides of gender, race, and ethnic origin do. Machines gain knowledge (and hence power) at super-exponential rates, not unlike the progress AI has made over the past two decades (especially after the logjam artificially created by Minsky in the 1960s over the logic-gate limitations of perceptrons, i.e., XOR). Intermediate stages of both of these century-scale prognostications have already been approached, if not in technological manifestations then in theoretically successful thought experiments. What makes these proto-predictions convincing are the forces of technological curiosity and of society's malaise and fetish for slavery. Societies clamor for the new slave (the machine) in the guise of a tool that will facilitate a new, easier life for us all (or at least for those who imagine themselves among the aristocratic few who remain).

The curious scientific mind, akin to the feeling tentacles of an octopus searching for nourishment or a superior position, is really trying to find the technological nutrients to stay one step ahead of the restless brain extension of itself. We are all but the skin that provides our only feedback (I use skin here as a metaphor for all of the sense organs) to our scrap heap of a neuronally entangled brain hell, clamoring to escape, to be extended past our existential limitations. For this to happen, we invent, create, and destroy via those same devices, both good and evil (different sides of the same coin).

The two prognostications noted above are also entangled with our progenic yearning to extend past our deaths. Immortality thus has a newly defined face: technological curiosity (inventions to escape our bodies) and slave surrogates (places to put our new sequence of bodies into). What if we do not engage in this neo-social-Darwinist game of flipping physical existences? What if, instead, we redefine our immortality by our perceptual space-times, abandoning physicality altogether? What if we deconstruct our information barriers, create artificial information makers, and hence create anything of any sort, including physical manifestations of alternate universes (generalizing the many-worlds of the same-named interpretation of quantum mechanics to perceptual information universes)? We could then define death as the never-reached Zeno distance, computed from Zeno machines (Potgieter, 2006), between what we are thinking now (which is continuously connected to the next thing we will think of anyway; there is no such thing as living in the present, our neuronal structure does not permit it - sorry, transcendentalists) and a made-up end point of thinking: Western religious definitions of death, (re)incarnation, and transformation. I have just defined a software program for the inscription of digital information immortality. We then embed this in the von Neumann self-replicating machine progeny and further mutate it with Godelian self-writing logic so that it expands itself automatically, defining a superior type of self-reflective and progressive trans-consciousness (von Neumann, 1966; Schmidhuber, 2006). Will machines accomplish this before humans do it for them and for ourselves? That is the more relevant and fundamental prognostication for human-machine transmogrification.
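
As a toy illustration of the never-reached Zeno distance, the sketch below sums the accelerating step times usually attributed to a Zeno machine (step k takes 1/2^k units of time): the total converges to 1, yet no finite number of steps exhausts it. This is only arithmetic under that standard assumption, not an implementation of hypercomputation.

```python
# Toy arithmetic behind the Zeno-machine picture: step k takes 1/2**k units of
# time, the elapsed time tends to 1, and the remaining "Zeno distance" is
# nonzero after every finite number of steps.
from fractions import Fraction

def zeno_elapsed(n_steps: int) -> Fraction:
    """Exact elapsed time after n_steps accelerating steps (limit is 1)."""
    return sum(Fraction(1, 2 ** (k + 1)) for k in range(n_steps))

for n in (1, 4, 16, 64):
    remaining = 1 - zeno_elapsed(n)
    print(f"steps={n:3d}  remaining={float(remaining):.3e}")   # shrinks, never 0
```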

References

Arbesu, J. (2012). Transport and transhumans. The Futurist, 46, 5.

Blasband, M. (2012). When the machines take over. The Futurist, 46, 5.

chilloutpoint.com (2010). Retrieved from http://www.chilloutpoint.com/featured/human-and-robots-visions-of-the-future.html.

Potgieter, P. H. (2006). Zeno machines and hypercomputation. arXiv:cs/0412022v3 [cs.CC].

Schmidhuber, J. (2006). Godel machines: Self-referential universal problem solvers making provably optimal self-improvements. arXiv:cs/0309048v5 [cs.LO].

von Neumann, J. (1966). Burks, A. W. (ed.). Theory of self-reproducing automata. Champaign, IL: University of Illinois Press.

Saturday, August 11, 2012

CS855 - Week 5 - Agora-like Democracy of Group-thought or Western Propaganda of Ephemeral Fairness

The somewhat vague notion of a technically defined neo-democracy (a technology of democracy) briefly outlined in the New Agora article by Schriebman and Christakis (2008) and their earlier works posits that a more effective group analysis of a complex problem can be rendered by a more democratic mode of collaboration and interchange that minimizes group influence on individuals. The failure modes it targets are known in the systems-integration vernacular as spreadthink (ineffective and overly divergent individual and small-subgroup memetic forces) and groupthink (a pseudo-convergence of compromises forced by coercive majorities or super-majorities) (Warfield & Teigen, 1993; Warfield, 1995). The erroneous priority effect (EPE) is mentioned as a major force in the fallacy of industrial group decision-making: in the EPE, narrow-minded philosophies (localisms, nationalisms, etc.) exert an overwhelming, biasing effect on decision-makers. These two notions of influence on thinking within groups sit at nearly opposite ends of the spectrum of group-influenced reasoning.

The New Agora approach, based on the Structured Design Dialogue Process (SDDP), is a decidedly integrative approach to systems thinking as pioneered by many early systems thinkers, including Warfield, during the second half of the 20th century as an outgrowth of post-positivist and post-modernist methodological philosophies. While the reasoning behind the two previously discussed group-analysis methodologies, the NGT and DM-type collaborative processes, rests on at least some (albeit premature and incomplete) experiential, qualitative, and minimally quantitative studies, the democratization of group analysis via the New Agora approach, specifically through the SDDP, seems ad hoc at best. The SDDP is a dialogue-clarifying (inquiry) process ladder comprising ten sub-stages (Schriebman & Christakis, 2008). Its architecture consists of seven construct-category modules, which in turn contain sub-construct components totaling 31 overall: (1) 6 consensus methodologies, (2) 7 language patterns, (3) 3 application time phases, (4) 3 key role responsibilities, (5) 4 stages of interactive inquiry, (6) collaborative software and facility, and (7) 6 dialogue laws. Using this architecture, a sequence of ten inquiry steps can be ordered as in the diagram below:

The SDDP Process Stages
The SDDP architecture is then utilized as a toolkit for executing the ten inquiry steps. The assumption is that at each stage of enlightenment about the starting complex problem, deconstruction occurs, followed by effective analysis and solution consensus. What the article does not emphasize is that in real group problem solving, this sequence may be non-linear, chaotic, and unordered; it may also be partially overlapping and interlaced with punctuated progression and digression. Clear problem-solving does not proceed in lock-step in a consistent direction (aha moments are mostly unpredictable and often acausal). How can convergence to a solution be proven? The emphasis in the SDDP is on convergence to a more meaningful and effective problem solution based on gathering consensus and understanding the mutual associations of ideas. The ultimate Follettian dictum of effective "whole group" convergence of ideas based on modes of association, as opposed to representation weighting, is highlighted in the SDDP sub-process inquiries. Each of the sub-processes has merit in the philosophical and logical interpretation of human interactions and ideas; for example, abductive reasoning as posited by Peirce combines the interplay of deductive and inductive reasoning in logical decision-making. However, representational bias can still creep into a sub-process such as vote-and-rank, and one sub-process may amplify or dampen another sub-process or a sequence of them. This may seem to produce an equalization of ideas, but it does not guarantee a solution or convergence. All of the SDDP processes are more qualitative than quantitative. Are there published or well-thought-out studies depicting the effectiveness or improvement of operations using the SDDP or New Agora-like democratization of negotiated consensus analysis? Effective reasoning, democratic or not, does not equate to scientific, empirical, or causal effectiveness (precision of prediction or generalization).
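
For concreteness, here is a minimal data-structure sketch of the SDDP architecture as summarized earlier: seven construct-category modules (31 components in all) and ten ordered inquiry steps. The module labels follow my summary above; the step labels are hypothetical placeholders, not the article's wording.

```python
# SDDP architecture sketch: module labels from the summary above; component
# counts total 31 (software and facility counted as two components).
SDDP_MODULES = {
    "consensus methodologies": 6,
    "language patterns": 7,
    "application time phases": 3,
    "key role responsibilities": 3,
    "stages of interactive inquiry": 4,
    "collaborative software and facility": 2,
    "dialogue laws": 6,
}
assert sum(SDDP_MODULES.values()) == 31

# Nominally linear inquiry ladder; placeholder labels only. Real engagements,
# as argued above, may traverse these steps in a non-linear, overlapping way.
INQUIRY_STEPS = [f"inquiry step {i}" for i in range(1, 11)]

def run_lock_step(steps):
    """Naive lock-step execution: exactly the idealization critiqued above."""
    for step in steps:
        yield step

print(list(run_lock_step(INQUIRY_STEPS)))
```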

The philosophical approach used by the authors of the New Agora, based on the early management-systems thinking of Follett as collected in Graham (2003), proposes a theoretical democratization of how ideas from individuals engaged in a group approach to a complex issue can be harmonized into an optimized consensus statement describing a proposed solution or clarification of that problem. Many assumptions are implicit in such a thesis. Inherent in the endeavor is the proposed development of a geometry of languaging as the vehicle for collaborative dialogue, leading to the authors' notion of a technology of democracy. Geometry carries notions of distance, proportion, and angular direction, yet there are no notions of distance (or, more generally, divergence measures), proportionality, or angular direction anywhere in these so-called technologies of democracy. Is this a facetious mathematization of a social notion of relationship, or just poetic systems license to generalize? I think not, because systems can be defined as succinct mathematical objects using category theory, a branch of meta-mathematics; indeed, a systems category can serve as a generalized template for any physical and/or mental framework (Doering & Isham, 2007; Doering & Barbosa, 2011).

Mary Parker Follett
The largest of these assumptions is that convergence of ideas exists at all for groups whose individual members have random or largely unknown characteristics. The SDDP includes sub-processes that endeavor to unpeel the hidden agendas of individuals in order to arrive at an optimally efficient but democratic consensus; yet democratic results are not a causal endpoint of democratic processes. The panoramic question is: how does a process (if at all) produce a nearly whole solution acceptable to most, if not all, subgroups of the decision-making or influencing population? I believe this problem can be addressed in many cases with massive statistical computation, for example Monte Carlo modeling of human reasoning and affective, emotional decision-making; all human decision-making is grounded in micro-emotions by the very construction of our neural structure. The homunculus argument of an embedded spirit or mind (something inside our souls and minds separate from our computing brain) versus the universal computational neural processor points to the controversy of the individual-group dichotomy of decision-making and its post-decision interpretation (was it a truly fair and democratic decision?). Group decision-making cannot be interpreted in purely individual terms because of the psycho-physical separation of the senses (the spatio-temporal perceptual differences between two separable observers).
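
A minimal sketch of the kind of Monte Carlo computation suggested above, assuming a toy bounded-confidence opinion model with a small "micro-emotion" noise term. It is illustrative only; it is not the SDDP and not a validated model of human affect.

```python
# Monte Carlo estimate of how often a toy group reaches consensus: agents hold
# opinions in [0, 1], interact pairwise only when their views are close enough,
# and each interaction averages the two views plus emotional noise.
import random

def one_dialogue(n=20, tolerance=0.3, noise=0.02, rounds=400):
    """Return True if the group ends up within 0.1 of a single opinion."""
    opinions = [random.random() for _ in range(n)]
    for _ in range(rounds):
        i, j = random.sample(range(n), 2)
        if abs(opinions[i] - opinions[j]) < tolerance:   # only nearby views interact
            mid = (opinions[i] + opinions[j]) / 2
            opinions[i] = mid + random.gauss(0, noise)
            opinions[j] = mid + random.gauss(0, noise)
    return max(opinions) - min(opinions) < 0.1

def consensus_probability(trials=2000, **kwargs):
    return sum(one_dialogue(**kwargs) for _ in range(trials)) / trials

print(f"P(consensus) ~= {consensus_probability():.2f}")
```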

Endeavors to reach group consensus via the SDDP, or via any other inquiry that gathers individuals, are doomed to failure because of this continuum of separation. Even so, sub-convergence (sub-optimality) of ideas may happen, which is better than wide-open disagreement. In this respect, the SDDP could be used to home in on consensus interpretations for my proposed innovation of future human-machine entities. No one will agree on what separates a human from a well-enough-conceived humanoid machine, so a consensus-building process, an SDDP-like approach to pseudo-equivalent convergence of ideas, will be required. That consensus process will have social, psychological, and technical implications for nearly every aspect of human living, because it defines a new techno-socio-economic structure for product development, knowledge, and human motivation for advancement into the next stage of trans-human existence: the hyper-existence of all living beings.

I want to close with a brief historical critique of the use of the Agora analogy for democracy. Democracy did not start in the Agora, nor was it pure; it percolated through the intellectual revolutions happening around Greece, and more specifically Athens, more than three hundred years BC. The Agora the authors evoke was a metropolitan phenomenon some four centuries after proto-Christianity began from its intellectual roots in the Gnostic tradition, and proto-Christianity broke with the Gnostics because of their love of reasoning. The Agora comprised various classes of citizens, including slaves, for whom democracy did not exist. It was more of a science-based summit for the world of its time, and it therefore tried to minimize the influence of superstitious religious zealotry, including the Roman and Christian mythologies of the day. Both of those cultures slowly destroyed this summit of knowledge through the emotional reasoning of religious-meme survival. So perhaps the democratization of group decision-making should be based on proto-Greek democracy, with all its warts (fewer slaves, but class separation nonetheless). Everything, it seems, including democracy, is interpreted relativistically by humans with respect to themselves and their neighbors. That relativism may also be the largest reason for the failure of any equalization of ideas in any society.

Finally, the very act of linearizing progress and group decision-making by introducing a matrix-like approach to stages of inquiry into a complex problem seems to fly in the face of evolutionary development itself. It suggests that the authors' new technology of democracy is really a controlled attempt devoid of evolutionary processes, i.e., one dictated by other processes in a contrived manner. The real new technology of democracy might be better claimed by the phenomenal metamorphosis of the web into the semantic web and then into a future hyper-intelligent web (the subject of a paper I am presently writing).

References

Doering, A., & Isham, C. J. (2007). A topos foundation for theories of physics: Formal language for physics. Retrieved from http://arxiv.org/abs/quant-ph/0703060

Doering, A., & Barbosa, R. S. (2011). Unsharp values, domains and topoi. arXiv:1107.1083v1 [quant-ph].

Graham, P. (Ed.). (2003). Mary Parker Follett: Prophet of management. Beard Books.

Schriebman, V., & Christakis, A. N. (2008). New agora: New geometry of languaging and new technology of democracy. Updated version 2008. Journal of Applied Systemic Studies, 1, 1, 15-31.

Warfield, J. N. (1995). Spreadthink: Explaining ineffective groups. Systems Research and Behavioral Science, 12, 1, 5–14.

Warfield, J. N., & Teigen, C. (1993). Groupthink, clanthink, spreadthink, and linkthink: Decision-making on complex issues in organizations, 4-5, 31. Institute for Advanced Study of the Integrative Sciences, George Mason University.

Wednesday, August 1, 2012

CS855 - Week 4 - Group Decision Making - It's a Game Strategy

Two popular group decision-making methodologies are investigated this week: the Delphi Method (DM), as formally developed in Dalkey & Helmer (1963), with variants and hybrids in particular domains such as the graduate-research methodology in Skulmoski, Hartman, & Krahn (2007); and the Nominal Group Technique (NGT) and its variants, most notably the simplified modified version (MNGT) devised in Bartunek & Murnighan (1984) and applied to educational processes in Dobbie, Rhodes, Tysinger, & Freeman (2004). Both methodologies produce group decisions by quasi-consensus through an iterative process of refinement and weighting. The final product, in both cases, is a consensus on how to solve a problem with a target solution (e.g., an optimization, prognostication, strategy, or set of constructive suggestions). The differences lie in their starting inputs (interpreters and knowledge repository), group interaction dynamics, and iterative reassessment techniques. The DM starts with so-called experts in fields relevant to the problem to be solved; these are usually few in number compared with the larger pool of participant-like interpreters in the NGT. In both cases, convergence to a consensus is assumed to follow from the respective nature of interaction. In the Delphi method, for example, expert participants are given several rounds of discussion and contribution to the problem at hand; they essentially give their estimates of solutions or answers to the project's problems or questions, with an implicit drive to hybridize and combine solution types or estimates across rounds until convergence appears in the later rounds.
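
A toy simulation of the Delphi round structure described above: anonymous estimates, statistical feedback (here the group median), and partial revision toward that feedback in each round. It is a sketch under assumed parameters, not a faithful model of real expert behavior; the spread shrinks each round while accuracy still depends on the starting panel.

```python
# Toy Delphi rounds: experts estimate an unknown quantity, see the group median
# as anonymized feedback, and partially revise toward it in each round.
import random
import statistics

def delphi(true_value=100.0, n_experts=9, rounds=4, pull=0.4, seed=1):
    rng = random.Random(seed)
    estimates = [true_value * rng.uniform(0.5, 1.8) for _ in range(n_experts)]
    for r in range(1, rounds + 1):
        feedback = statistics.median(estimates)
        estimates = [e + pull * (feedback - e) for e in estimates]   # partial revision
        print(f"round {r}: median={feedback:7.2f}  spread={statistics.pstdev(estimates):6.2f}")
    return statistics.median(estimates)

print("final consensus:", round(delphi(), 2))
```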

In the NGT, input comes in the shape of general estimates from the process participants. These are not subject to scrutiny, which theoretically invites more openness and less mob influence on individual choices. Intermediate "moments of silence" are introduced to give participants the opportunity to collect their individual thoughts and creativity. Successively smaller groups are then formed to drill down into sub-estimates or answers and compare them with the original target problem; ill-structured problems may be broken down by this iteration of forming smaller problem-solving clusters until the original problem is clarified and can be attacked more effectively. In the modified version (MNGT), small amounts of time are allotted at the conclusion to focus groups on terminating the project and pinpointing consensus answers, and any divergence between the subgroup estimates and the original problem target is reconciled and brought back to relevance where possible. In the Delphi method, each expert's individual estimates are weighted and rank-ordered by the group in every round; in the NGT, rank ordering also occurs, but at the smaller-group level before being projected back onto the larger original problem. Convergence rates differ according to (1) the dynamics of the groups involved, (2) the magnitude of the original target problem, and (3) the quality of input information given to the interpreters (i.e., the veracity of the knowledge input). Knowledge used in group decision processes is usually incomplete, vague, directionally unknown, convoluted, or biased. Another very large assumption made when using these methodologies concerns "wisdom of the crowds" effects (Surowiecki, 2005), network computational growth, and the robustness of statistical smearing estimates (i.e., variants of averaging). If anything, these assumptions introduce more ambiguity and pseudo-randomness into the process by the very nature of the structures required for such conditions to hold.

Theoretically, group decision-making involves the dynamics of coalitions, exogenous effects, and effectively unbounded interactions, interchanges, and conditional probabilities of success among the participant groups. Dalkey (2002) gives the DM a more succinct mathematical-statistical face and points out the enormous uncertainty, from many angles, in forming a statistical structure for group estimation methods. In Lee & Shi (2011), weighted group estimation improves on the straight averaging of opinions, but the causal effects are unknown; the wisdom of the crowds also suffers from the ignorance of the crowds when left unchecked, or when weighting schemes are biased by the very human frailties of pattern-opia, anchoring, and other well-studied judgment limitations (Tversky & Kahneman, 1974). Parente & Anderson-Parente (2011) point to the need to choose a more diverse, representative, and competent expert panel for a DM when possible, thereby cutting down on possible technological bias. They also argue that mere majority rule is not adequate for accuracy: there should be well-structured statistical criteria for robust estimators of accuracy in the panels' final outcomes. Time-based solutions should also be given (not just the scenarios likely to happen but the time frames in which they would occur) in order to produce more useful prognostication. This is what I have in the past called the Nostradamus scam (or effect), in which scenarios are predicted as vague generalizations without time frames and can conveniently be fitted retroactively to events for self-approval.
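
The sketch below illustrates, under assumed noise levels, the kind of improvement weighted aggregation can give over straight averaging; it is not Lee and Shi's actual procedure, only a precision-weighted toy comparison.

```python
# Straight average versus inverse-variance (precision) weighted average of
# noisy expert estimates; the experts' noise levels are assumed known here.
import random

random.seed(0)
TRUE = 50.0
noise_sd = [2.0, 5.0, 10.0, 20.0, 40.0]            # assumed expert reliabilities

def trial():
    ests = [TRUE + random.gauss(0, sd) for sd in noise_sd]
    plain = sum(ests) / len(ests)
    w = [1.0 / sd ** 2 for sd in noise_sd]          # inverse-variance weights
    weighted = sum(wi * e for wi, e in zip(w, ests)) / sum(w)
    return abs(plain - TRUE), abs(weighted - TRUE)

errors = [trial() for _ in range(5000)]
print("mean |error|, straight average:", round(sum(e[0] for e in errors) / len(errors), 2))
print("mean |error|, weighted average:", round(sum(e[1] for e in errors) / len(errors), 2))
```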

I propose that group estimation and decision-making is far more dynamic than management science or operations research purports it to be. In fact, it is a game-theoretic program involving continual iterative Bayesian conditioning and coalition sub-strategies, with an optimization of profit that can be interpreted as a measurement of the accuracy, or realism, of a participant's ability to give plausible estimates. Individual and group utility functions (in the game-theory sense) represent individual and group participatory realism in group decision-making; it is akin to fitness-based multi-criteria decision-making with multiple players, i.e., a multi-criteria evolutionary game. How does one measure the effect of a priori probabilities of the success of experts or participants over a long period of time and account for that in the group decision process? Do you weight each participant differently based on this measure of "realism"? If you did, you would introduce other biases, because who would measure the measurer? As they stand now, neither methodology explicitly accounts for the realism of the participants. Experts and non-experts alike have agendas that are, often unconsciously, not optimal for the group; hence sub-coalitions form in order to reach pseudo-consensus results within these group methods. There are always elements of coercion when trying to reach consensus in non-agreeing groups. Futuring is no exception and could in fact invite more subversive and insidious coercion, because of self-fulfilling prophecies and blind wisdom-of-the-crowds memetic manipulation of future prospecting. Wisdom-of-the-crowds effects may lead to more optimal decisions, but only if the size, diversity, and independence of the deciding panels are sufficient (Lee & Shi, 2011). Even then, the group dynamic may be, at best, sub-optimal, stopping at local optima while lacking the long-term wisdom to pursue out-of-the-box approaches and technologies. Elements of social choice theory also come into play when these sub-coalitions form in Delphi-type groups or star chambers; social evolutionary dynamics are probably more relevant than naive statistical smearing of estimates and opinions in group processes. The MNGT methods suffer from similar statistical weaknesses and the same issues of participant realism. Also, since these participants are usually part of a larger experiment, they become "the experiment" and hence are inextricably bias-tied to the measurement process.
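
A minimal sketch of the "participant realism" idea: each expert's weight is updated in Bayesian fashion from a hypothetical track record (the likelihood of their past error under an assumed Gaussian noise model) and then used to pool the next round of estimates. All numbers and the likelihood form are assumptions.

```python
# Bayesian-style reweighting of experts from their track record, then pooling
# of new estimates with the updated weights.
import math

def gaussian_likelihood(error: float, sd: float = 10.0) -> float:
    return math.exp(-0.5 * (error / sd) ** 2)

def update_weights(weights, past_errors):
    """One reweighting step: posterior weight proportional to prior x likelihood."""
    posterior = [w * gaussian_likelihood(e) for w, e in zip(weights, past_errors)]
    total = sum(posterior)
    return [p / total for p in posterior]

weights = [1 / 3] * 3                    # uniform prior over three experts
past_errors = [2.0, 8.0, 25.0]           # hypothetical track record
weights = update_weights(weights, past_errors)

new_estimates = [102.0, 95.0, 60.0]      # hypothetical next-round estimates
pooled = sum(w * e for w, e in zip(weights, new_estimates))
print([round(w, 3) for w in weights], round(pooled, 1))   # accurate experts dominate
```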

With respect to the ambitious futuring prospects I have evangelized in this course, in particular the introduction of generalized non-classical computational models to build hybrid human-machines, either methodology would be insufficient on its own. There are several reasons for this shortfall: the relative shortage of knowledge-domain experts across all the multi-disciplinary fields involved, and the insufficient literature to serve as their starting input. If each method could be modified so that group estimation is given more statistical and dynamic robustness, along with realism (accuracy measurement) of the participants, the prospect of garnering a successful, iteratively formed idea of human-machine hybrids as living support for our frailties would be more plausible. Iterative refinement would probably be beneficial in one respect: brainstorming aside, group estimation methods are good for refining starting points in a development process. Plausible, non-coerced convergence after that may not be so clear-cut, or may not exist at all.

I want to add that the variance among participants' estimates is presumed to decrease over successive iterations as consensus converges. That variability may be falsely measured because of coercive influences, even in the NGT methodologies, since each individual eventually faces subgroup consensus building and hence the harmonization of ideas. Participants may not "feel" coerced, yet they are pressured to come to consensus toward the end of the process, which may make the ending of the consensus process a false one. In the end, consensus building may also not be optimal; that depends on the nature of the problem and on what counts as an acceptable region of the solution space.

In doing further research into group decision-making methodologies, I came upon a very recent piece, more exactly a letter to the editor of Science, by Mercier and Sperber (2012), in which they separate two probable scenarios and their experimental dynamics in wisdom-of-crowds effects. The first concerns group decision processes that depend mostly on the perception and memory of the participant panel. If the motive for judging one another rests on perception, on things like self-assessment or perceived self-worth and how each member projects it to everyone else (boosting, reputation, etc.), without regard to actual effectiveness, then consensus is usually reached by appointing leadership to the perceived leader; communication among participants becomes largely irrelevant. In other words, the member with the highest reputation or perceived excellence gets their views weighted far more heavily in the final consensus, regardless of the group methodology used. The second scenario is when the exchange of arguments serves the primary purpose of rooting out the best choices (i.e., reasoned debate and ranking). Here, convergence of ideas is optimized when knowledgeable exchanges are made, and even a minority view can win out through effective debating and reasoning. In the end, with effective communication and an exchange of ideas among panels of the right size, the wisdom of crowds does better on average than oligarchies or aristocracies of idea makers. It may do worse if communication does not exist, is minimized by perceived dominance within the group, or if majority mob mentality overrides the group's optimal core of reasoning.
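
The toy comparison below contrasts the two scenarios: a "reputation" consensus that simply adopts the view of the member with the highest (randomly assigned, accuracy-unrelated) reputation, versus an "exchange" consensus crudely approximated by the post-discussion median. Both mechanisms are stand-ins of my own, not Mercier and Sperber's experiments.

```python
# Reputation-driven versus exchange-driven consensus on a noisy estimation task.
import random
import statistics

random.seed(2)
TRUE = 10.0

def panel_views(n=9, sd=3.0):
    return [TRUE + random.gauss(0, sd) for _ in range(n)]

def reputation_consensus(views):
    reputations = [random.random() for _ in views]     # unrelated to accuracy
    return views[reputations.index(max(reputations))]

def exchange_consensus(views):
    return statistics.median(views)                    # crude proxy for debate

rep_err = statistics.mean(abs(reputation_consensus(panel_views()) - TRUE) for _ in range(5000))
exc_err = statistics.mean(abs(exchange_consensus(panel_views()) - TRUE) for _ in range(5000))
print(f"reputation-driven |error| ~ {rep_err:.2f}, exchange-driven |error| ~ {exc_err:.2f}")
```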

In a futurology panel using a DM- or NGT-type methodology, interesting things may occur based on reputation alone. Since futurologists have extremely poor records of prediction, perception is all they have; but if everyone is roughly equally likely to be wrong, then the perception should be that no one deserves to lead, and consensus should rest entirely on the effective and reasonable exchange of ideas grounded in scientific first principles of physical matter and broad sociological phenomena. One futurologist mentioned by Cynthia in this week's class (in the SL session) was Herman Kahn, he of Cold War thermonuclear and Dr. Strangelove fame. Kahn dropped out of graduate school, was hired by the RAND Corporation (mostly because he shared the extreme right-wing philosophy of its board), and gave us the ideas of gaming out wars that caused so much trouble in the Cold War and Vietnam. Robert McNamara (Kennedy's and Johnson's secretary of defense) was a huge fan, as was Henry Kissinger, and Kahn's influence helped escalate the Vietnam War seemingly without end. Kahn was wrong around 90% of the time, according to Sherden (1998). He was a dominant character and personality in meetings with the Pentagon, and it is very clear that in any panel he would have ruled by perception rather than by the effective exchange of ideas and debate. Kahn later changed his mind about how to approach wars, especially nuclear ones; I think it was a bit too late, after a tremendous loss of life in the world and the laying of foundations for the military-industrial complex that has put us in so much economic trouble now. McNamara would later admit his and Kahn's mistaken philosophies of war and of predicting the future.


References

Bartunek, J. M., & Murnighan, J. K. (1984). The nominal group technique: Expanding the basic procedure and underlying assumptions. Group and Organization Studies, 9, 417-432.

Dalkey, N. C., & Helmer, O. (1963). An experimental application of the Delphi Method to the use of experts. Management Science, 9, 3, 458-468.

Dalkey, N. C. (2002). Toward a theory of group estimation. In Linstone, H., & Turoff, M. (Eds.), The Delphi method: Techniques and applications (pp. 236-261). London, England: Addison-Wesley.

Dobbie, A., Rhodes, M., Tysinger, J. W., & Freeman, J. (2004). Using a modified nominal group technique as a curriculum evaluation.

Lee, M. D., & Shi, J. (2011). The accuracy of small-group estimation and the wisdom of crowds.


Mercier, H., & Sperber, D. (2012). "Two heads are better" stands to reason. Science, 336, 979.


Parente, R., & Anderson-Parente, J. (2011). A case study of long-term Delphi accuracy. Technological Forecasting & Social Change.

Sherden, W. A. (1998). The fortune sellers. New York, NY: Wiley.

Skulmoski, G. J., Hartman, F. T., & Krahn, J. (2007). The Delphi method for graduate research.

Surowiecki, J. (2005). The wisdom of crowds. New York, NY: Anchor Press.

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 4157, 1124-1131.

Monday, July 23, 2012

CS855 - Week 3 - Gaming Reality and the Universe?

The TED presentation by Jane McGonigal (McGonigal, 2012), originally given in February 2010 and again at the June 2012 TED conference, and based on her book (McGonigal, 2011), was exceptionally fresh in many ways. The consensus among non-gamers is that gaming is a sort of passive activity removed from the pressures of reality, correlated with inactivity and unrestrained behavior. As McGonigal explains, however, gaming has introduced a new kind of collaborative and interactive research methodology. Gaming also gives groups (and individuals) the opportunity to express and experience co-opetition, a game-theoretic notion of cooperative survival seen through the lens of the famous Prisoner's Dilemma game and its variants, including the Tragedy of the Commons, in which individuals decline to contribute to a common good if they think they can garner the benefits of the group's work without participating themselves. These game-theoretic strategies have been around for five decades now, since the Cold War end of the world was first gamed out by both US and USSR scientists after World War II.
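
For readers unfamiliar with it, the sketch below encodes the standard one-shot Prisoner's Dilemma with textbook payoff values (my assumption; they are not taken from McGonigal). It shows the tension behind co-opetition: mutual cooperation beats mutual defection, yet defection dominates for each player individually.

```python
# One-shot Prisoner's Dilemma with conventional payoffs (row, column).
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_response(opponent_move: str) -> str:
    """Row player's payoff-maximizing reply to a fixed opponent move."""
    return max("CD", key=lambda m: PAYOFF[(m, opponent_move)][0])

print(best_response("C"), best_response("D"))    # 'D' 'D': defection dominates
print(PAYOFF[("C", "C")], PAYOFF[("D", "D")])    # yet (3, 3) beats (1, 1) jointly
```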

While McGonigal does not directly mention these strategies in her book and talk, her intuition and research premise, that gaming produces cooperative and competitive regimes simultaneously while groups solve problems collectively, becomes singular when she takes the notion to the next level: crafting a different way of attacking reality and real problems. Gaming is potentially a new scientific methodology, a distinctive point in the spectrum of collective-intelligence approaches on the semantic web. Gaming has, most famously, helped solve protein-folding problems in biology, and it may help solve some very sticky problems in computational dynamics as well. More generally, McGonigal points to very positive, psychologically inspiring phenomena in gamers when they try to solve problems: a sort of collective addiction to getting to the problem at hand and solving it as a collective. This resembles, in exaggerated form, the intense work done by researchers when they know what is at stake and are close to the finish line; case in point, CERN's and Fermilab's quests to statistically validate the Higgs field and hence the Higgs boson and its potential artifacts.

This methodology can be carried into the realm of future technological prowess because it points to a new way of approaching future problem solving: a mixture of computational and human collective power that could usurp the traditional disciplinary methodologies of qualitative and quantitative research (and their hybrids). Futuring might then be seen as a set of simple computational collective experiments.

Finally, to reiterate the forces involved in the theory of gaming interaction and development as posited by McGonigal: the one I emphasize most is the technical force, a new paradigm of research conducted while interacting with others and collectively accelerated by them. The other force is societal, essentially because widespread usage would change the fundamental way society interacts in work, design, and implementation: it would set a precedent for safer prototyping, for the dissemination of ideas connected to a development, and for their publication.

References

McGonigal, J. (2011). Reality is broken: Why games make us better and how they can change the world. New York, NY: Penguin Press.

McGonigal, J. (2012). Gaming can make a better world. TEDGlobal 2012. Retrieved from http://www.ted.com/talks/jane_mcgonigal_gaming_can_make_a_better_world.html.

Sunday, July 22, 2012

CS855 - Week 2 - Internet of Things and Things of the Internet

The NMC Horizon Report for education, 2012 edition, highlights several innovation trends and technologies that are approximately one to five years away from a phase-one maturity with enough market share to disrupt current industries and ways of educating (Johnson, Smith, Willis, Levine, & Haywood, 2012). The report, however, concentrates on those aspects of innovation that bear on educational prowess and creativity of presentation.

I shall focus on the trend (and technologies) of the so-called Internet of Things: the gadget world of devices that have autonomous identification, intelligence of presence, and the ability to connect to the Internet (initially through RFID and NFC), thereby becoming part of the web sphere of objects, often referred to as the IoT, with its accompanying architecture (IoT-A, 2012). This trend was placed in the four-to-five-years-from-market category. Examples of this species of device are smart parts implanted into larger structures that can feed real-time component information over the web to control apps; the information may include internal stresses and external feedback from other parts of the mother structure interconnected with them. One example not cited is the present ability to embed mechanical sensors in materials to detect stress fractures and anisotropic forces in real time, warning the central system of material failure before it happens, all controllable over the cloud (given adequate security protocols). The next phase in the development of the semantic web, along with the aforementioned expanded IPv6 address space, can already accommodate these devices and more. This is not a four-to-five-year innovation; these are, at worst, next year's holiday and industry toys. What would qualify as the next species of things on the Internet are truly smart semantic objects on a truly smart semantic web: devices which, when plugged into the web, find their "place" in the universe, i.e., adapt themselves to a region of connective tissue formed by other, similar devices and then build themselves into a new super-structure, devising an entirely new "thing" on the Internet. This is the new biology of the hyper-intelligent web. Why doesn't the report capture any such creativity instead of relying on familiar pseudo-futurological gimmicks that can be gleaned from the NY Times or the Wall Street Journal?
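
As a concrete (and entirely hypothetical) illustration of such an embedded smart part, the sketch below builds the kind of telemetry payload a strain sensor might publish to a cloud control application. The device ID, field names, threshold, and transport are invented; a real deployment would add authentication, encryption, and a purpose-built protocol such as MQTT or CoAP.

```python
# Hypothetical telemetry from a smart part embedded in a larger structure.
import json
import time

def read_strain_gauge() -> float:
    """Stand-in for a real sensor driver; returns micro-strain."""
    return 412.7

def build_payload(reading: float, device_id: str = "beam-07-sensor-3") -> bytes:
    payload = {
        "device": device_id,
        "microstrain": reading,
        "alarm": reading > 1500.0,      # assumed failure threshold
        "timestamp": time.time(),
    }
    # In a live deployment this body would be POSTed or pushed over MQTT to the
    # control application's ingestion endpoint.
    return json.dumps(payload).encode("utf-8")

print(build_payload(read_strain_gauge()))
```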

Hyper-intelligent agent Internet things

The Horizon Report board's main methodology for choosing these trends and technologies is a hybrid Delphi method with a partially renewed panel whose members are given certain information on prominent technologies. Some of the board members are from schools somewhat local to one of the places where I reside; they are more akin to community colleges and four-year institutions, and I was a bit disappointed by such a selection for the panel. The board also seems to select innovative technologies (and trends) based solely on their effect on educational methodologies; missing were research and technology gurus and actual innovators, past and present. With that proviso, one must conclude that such choices of future trends are wholeheartedly educational in focus. In particular, as far as the IoT is concerned, no mention was made of security, privacy, or scalability, which are major barriers to the deployment of the IoT and of a viable IoT-A (IoT-A, 2012). In general, I am not sure this is the best approach to listing future trends of any kind. Educators are oftentimes limited by what they see as cerebral and neurological barriers to penetrating another human brain; they seem to try the same thing over and over again until the student disappears from their classroom. Additionally, who chooses the original materials the board chooses from? Are those choices biased by this hidden star chamber (the ones who pick the initial offering of materials to the board), if you will? The one-to-two-year innovative trends (and technologies) seem to be already maturing, i.e., the group's picks are based on phase-two innovations, not pre-innovations.

References


IoT-A (2012). The internet of things - architecture. Retrieved from http://www.iot-a.eu/public/front-page.

Johnson, L., Smith, R., Willis, H., Levine, A., & Haywood, K. (2012). The 2012 Horizon Report. Austin, TX: The New Media Consortium.

Thursday, July 12, 2012

CS855 - Week 1 - Of things in different times and patterns


After reviewing some of the first-week comments posted by CTU class participants, the replies from Cynthia, and the first Breeze class session (which I was regrettably unable to attend while driving across state highway 83 in South Texas, which runs alongside the Rio Grande, never more than a mile from the border - a sort of wild-west frontier still), I am struck by the rapid progress of the hard sciences since the publication dates of the assigned books, in particular The Fortune Sellers (a play on the phrase "fortune tellers"). This is not a direct criticism of Sherden (he wrote for another period), but I keep finding disagreement with many of his conclusions, especially his fatalism about the indeterminism of the social sciences, economics, and other disciplines that purport to model social phenomena. Things have changed dramatically in a little over a decade. In fact, by adding just a decade to the periods of many of his criticisms of how scientists "got it wrong" during the 1950s, 60s, and 70s, it turns out they got it right after all. That is not mere coincidence: because we better understand the technical aspects of complexity and of black-swan and long-tail events, social scientists are better at describing phenomena and consequently at many of their meta-projections, although nothing is perfectly tuned to the future (quantum fluctuations and the like), and notwithstanding the calamity of the banking crisis of 2008 and the heartbreak of 9/11, both events we could have been forewarned of using the intelligence data and technologies existing at the time.

However, when Sherden does mention quantum mechanics, he uses it loosely; he does not quite grasp that it describes microcosms, not macro-worlds such as social constructs at our scale. Scalability in the sense of emergence in adaptive complex structures is another matter altogether; there it has technical meaning. Evolutionary patterns appear to operate across scale boundaries in complex adaptive structures, and that is when things really take shape, so to speak, at the meso-scale. I have been evangelizing this for two years now, since my work on computational information evolution began. Secondly, there is a class of models called stochastic chaos, and that type of chaotic structure is not deterministic; it is, in fact, a modeler's worst nightmare. It does not stop there: there is quantum chaos as well, with quantum probability and stochastics part and parcel of that chaotic model. So chaos is not so clearly defined and, more interesting still, it is hard to discern whether a given phenomenon is indeed a chaotic process. Time series analysis is usually the first step taken to approximate whether something approaches being chaotic.
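
As a back-of-the-envelope illustration of that first step, the sketch below estimates the largest Lyapunov exponent of a logistic-map time series, with and without additive noise standing in for "stochastic chaos". It cheats by using the map's known derivative; it is not a general-purpose (e.g., Rosenstein- or Wolf-style) estimator for empirical data.

```python
# Crude chaos diagnostic: a positive average log stretching rate suggests
# sensitive dependence on initial conditions.
import math
import random

def logistic_series(r=3.9, x0=0.2, n=5000, noise=0.0, seed=0):
    rng, x, xs = random.Random(seed), x0, []
    for _ in range(n):
        x = r * x * (1 - x) + rng.gauss(0, noise)
        x = min(max(x, 1e-9), 1 - 1e-9)          # keep the orbit inside (0, 1)
        xs.append(x)
    return xs

def lyapunov_estimate(xs, r=3.9):
    """Average of log |f'(x)| along the orbit, for f(x) = r x (1 - x)."""
    return sum(math.log(abs(r * (1 - 2 * x))) for x in xs) / len(xs)

print("deterministic:", round(lyapunov_estimate(logistic_series()), 3))          # ~0.5 > 0
print("with noise   :", round(lyapunov_estimate(logistic_series(noise=0.01)), 3))
```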

On economic forecasting, Sherden dismisses chaos entirely as a tool for economics based on brief statements from Brian Arthur and others at the SFI. Yet the SFI understands the importance of the meta-patterns introduced by treating chaotic processes as models. In particular, while Mandelbrot's fractal description of some economic processes, such as macro stock-market dynamics, is not a micro-level definition, it is nonetheless instructive for finding economic meta-patterns. Stochastic modeling of market prices is what quants and arbitrageurs do, especially via Ito-type processes and variants of the Black-Scholes model, and they all failed to predict the banking and housing crisis of 2008 and other micro-movements; but that was not the fault of the mathematics per se. Incomplete information, mixed with larger-than-expected uncertainty in synergies and the mob irrationality of all of us trying to get the things we want instead of the things we need, contributed to those mistakes. That is essentially the point Nassim Taleb (he of black-swan fame) was trying to make. These sub-phenomena can all be modeled more accurately now, along with the adaptive complexity of their compounded processes, at least at the level of meta-patterns. Sherden hand-waves through all of these very hard scientific methodologies and pronounces them impotent with one movement of the wand; not even Taleb (a former quant) does that.
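
The sketch below shows the Ito-type workhorse referred to above, geometric Brownian motion, next to the same path generator with occasional jumps. The parameters are assumptions chosen only to illustrate how a pure-GBM view tends to understate extreme single-day moves.

```python
# Geometric Brownian motion versus GBM with rare crash-type jumps.
import math
import random

def simulate(days=250, mu=0.05, sigma=0.2, jump_prob=0.0, jump_size=0.15, seed=42):
    rng, dt, price, worst = random.Random(seed), 1 / 250, 100.0, 0.0
    for _ in range(days):
        ret = (mu - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
        if rng.random() < jump_prob:
            ret -= jump_size                    # rare downward jump
        worst = min(worst, ret)
        price *= math.exp(ret)
    return round(price, 2), round(worst, 4)

print("pure GBM       (final price, worst daily log-return):", simulate())
print("GBM with jumps (final price, worst daily log-return):", simulate(jump_prob=0.02))
```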

I propose that futurists, who limit themselves through situational bias when prognosticating from projections of current technology (as Sherden correctly asserts), must look beyond their own skins, their world, and even their dreams. The decade figure I threw out above seems to reflect a perceived future lag caused by these situational biases. Technical futurists like Michio Kaku recognize the lag and have adjusted some of their prognostications, which is probably a good way to get projections closer to the ballpark of possibility. I like the proposal of the late, great science-fiction writer and biochemist Isaac Asimov to develop a social physics that looks at history paths as patterns rather than as data for predictions. I think patterns matter more than data or statistical analysis per se. Complexity is really about non-linear dynamics, which dictates that fluctuations (quantum or not) introduce too much uncertainty at the beginning of, or during, a live process to permit precise determination of pinpoint future events. Patterns, on the other hand, are a way of categorizing what could happen and therefore what risk portfolios we might want to adopt; we should all have our own built-in insurance plans.

Sherden does pay proper homage to Karl Popper's foundational work on complexity, society, and technological advance. But, again, Popper could not possibly have foreseen the progress of science in a century. Einstein thought the hardest problems are, and will continue to be, those of individual human movements. Societies are built from these atoms, but the resulting structures, adaptive as they may be, display patterns, and again that is the key to "seeing through and beyond time." Which brings up my most important point about the future of futurology: time dilation, or the redefinition of time, may make all of this a moot subject, because then it becomes just a matter of running continuous-time lab experiments; we are all, again, rodents in petri dishes.

On the question of what the forces are (those words ending in "-al", although almost anything can be turned into such a thing): should we not define them more dynamically and precisely? Otherwise they become too general, without precise meaning and purpose. I like to look at forces in the physical sense; these are precise quantities when things are measurable. Forces can then be described by fields, another very useful, ubiquitous, and beautiful abstract construct of mathematical physics (not the mathematical field object, although the two could be woven into a single overarching abstract entity). Humans and social groups generate forces through the intermixing and synergy of different actions, and those actions can then be quantified to an approximate extent.

CS855 - The beginning, a futuring tool - Futuring a General Stochastic Non-Frege-Aristotelian Hypercomputer Universe


Conventional wisdom (whatever that may be) holds that futuring a world, much less a universe, is a fool's paradise extraordinaire. Even after taking into account non-linear dynamics (chaos), stochasticity, and non-Frege-Aristotelian many-valued logics, one is trapped by extrapolational biases and mindsets; they are in our neural fiber. However, also in our wetware is a propensity to develop scenarios for transformational paradigms, thinking beyond the realm of senses, dreams, and gods. Futuring is our anthropomorphic way of dealing with this cranial catharsis.

My proposed manner of performing futuring is through hypercomputation, using history paths as inputs and Bayesian stochastic operators as controllers (gates, junctions, and other advanced computational operations). Non-Frege-Aristotelian many-valued (or continuous-valued) logics, which subsume quantum probability, fuzzy logic, belief systems, possibility spaces, and other uncertainty regimes, can then play the role of interchangeable (hybrid) logical frameworks for these operators. Information chunks, the atoms from which history paths are formed, are extended to represent generalized uncertainty bits: gbits. Hypercomputation is visualized as general machines that take physical universes into account within quasi-causal frameworks (causaloids). Abstractions such as quantum, relativistic, quantum-gravity, super-Turing, von Neumann, Godelian, and Zeno machines, along with universal Turing machines (UTMs), are instantiations of this computational framework calculus.
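
The heavily simplified, purely classical sketch below shows only the plumbing of this proposal: a "history path" is a sequence of observed outcomes, the "Bayesian stochastic operator" is ordinary Bayesian updating over competing world-models, and the resulting degrees of belief stand in for continuous-valued gbits. The hypotheses and likelihoods are invented, and nothing here is hypercomputation.

```python
# Bayesian updating over a history path of outcomes.
def bayes_update(prior: dict, likelihood: dict, outcome: str) -> dict:
    posterior = {h: prior[h] * likelihood[h][outcome] for h in prior}
    z = sum(posterior.values())
    return {h: p / z for h, p in posterior.items()}

# Two hypothetical world-models and their outcome likelihoods (assumptions).
likelihood = {
    "stable world":   {"calm": 0.8, "shock": 0.2},
    "volatile world": {"calm": 0.4, "shock": 0.6},
}
belief = {"stable world": 0.5, "volatile world": 0.5}   # uniform prior "gbit"

history_path = ["calm", "calm", "shock", "calm", "shock", "shock"]
for event in history_path:
    belief = bayes_update(belief, likelihood, event)

print({h: round(p, 3) for h, p in belief.items()})
```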

A segue to ... CS 855 Socio-technical Futuring: