Monday, July 23, 2012

CS855 - Week 3 - Gaming Reality and the Universe?

The TED presentation by Jane McGonigal (2012), originally given in February 2010 and reprised at the June 2012 TED conference, and based on her book (McGonigal, 2011), was exceptionally fresh in many ways. First, the consensus among non-gamers is that gaming is a passive activity removed from the pressures of reality, associated with inactivity and unrestrained behavior. As McGonigal explains, however, gaming has introduced a new kind of collaborative and interactive research methodology. Gaming also gives groups (and individuals) the opportunity to express and experience co-opetition - the game-theoretic notion of cooperative survival seen through the lens of the very famous Prisoner's Dilemma game and its variants, including the Tragedy of the Commons, in which members of a group decline to contribute to a common good when they believe they can reap the benefits of the group's work without participating themselves. These game-theoretic strategies have been around for more than five decades now, since the prospect of a Cold War end to the world was envisaged by both US and USSR scientists after World War II.
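
As a concrete illustration of the tension behind those two games, here is a minimal sketch in Python (using the conventional textbook payoff values, which are my own choice rather than anything McGonigal cites) of why defection dominates a single round of the Prisoner's Dilemma even though mutual cooperation pays better.

```python
# One-shot Prisoner's Dilemma with conventional textbook payoffs
# (illustrative values only, not drawn from McGonigal's work).

PAYOFF = {  # (my move, their move) -> my payoff
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}

def best_response(their_move: str) -> str:
    """Return the move that maximizes my payoff against a fixed opponent move."""
    return max(("cooperate", "defect"), key=lambda my: PAYOFF[(my, their_move)])

if __name__ == "__main__":
    # Defection is the best response to either opponent move ...
    print(best_response("cooperate"), best_response("defect"))   # defect defect
    # ... yet mutual cooperation (3 each) beats mutual defection (1 each),
    # which is exactly the free-rider tension behind the Tragedy of the Commons.
    print(PAYOFF[("cooperate", "cooperate")], PAYOFF[("defect", "defect")])
```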

While McGonigal does not directly mention these strategies in her book or talk, her intuition and research premise - that gaming produces cooperative and competitive regimes simultaneously while groups solve problems collectively - becomes singular when she takes that notion to the next level: crafting a different way of attacking reality and real problems. Gaming is potentially a new scientific methodology, a distinctive point in the spectrum of collective-intelligence approaches on the semantic web. Gaming has most famously helped solve topological protein-folding problems in biology (the Foldit project), and it may help crack some very sticky problems in computational dynamics as well. Generally speaking, McGonigal points to some very positive, psychologically inspiring phenomena among gamers when they try to solve problems: a sort of collective addiction to getting at the problem at hand and solving it as a collective. This resembles, perhaps in exaggerated form, the intensity of the work researchers do when they know what is at stake and are close to the finish line - case in point, the quests at CERN and Fermilab to statistically validate the Higgs field and hence the Higgs boson and its potential artifacts.
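
To make the phrase "statistically validate" concrete: the particle-physics discovery convention is a five-sigma excess. A minimal sketch (assuming the usual one-sided Gaussian-tail convention; the real CERN and Fermilab analyses are far more elaborate) converts sigma levels into p-values.

```python
# Convert a significance level in sigma to a one-sided Gaussian tail p-value.
# Illustrative shorthand only; actual Higgs analyses use much richer statistics.
from math import erfc, sqrt

def p_value(sigma: float) -> float:
    """Tail probability of a standard normal beyond `sigma` (one-sided)."""
    return 0.5 * erfc(sigma / sqrt(2.0))

for s in (3.0, 5.0):
    print(f"{s:.0f} sigma -> p = {p_value(s):.2e}")
# 3 sigma -> p = 1.35e-03 ("evidence"); 5 sigma -> p = 2.87e-07 ("discovery")
```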

This methodology can be carried into the realm of future technological prowess because it points to a new way of approaching future problem solving: a mixture of computational and human collective power supplanting the traditional disciplinary methodologies of qualitative and quantitative research (and their hybrids). Futuring may then be seen as a set of simple computational, collective experiments.

Finally, to reiterate the forces involved in the theory of gaming interaction and development as posited by McGonigal, the one I would emphasize most is the technical aspect of advancement: a new paradigm of research, interacting with others while doing it, and a collective acceleration of it. The other force involved in this theory is the societal aspect, essentially because its widespread use would change the fundamental way in which society interacts in work, design, and implementation. It would set a precedent for safer prototyping, for dissemination of the ideas connected to a development, and for their publication.

References

McGonigal, J. (2011). Reality is broken: Why games make us better and how they can change the world. New York, NY: Penguin Press.

McGonigal, J. (2012). Gaming can make a better world. TEDGlobal 2012. Retrieved from http://www.ted.com/talks/jane_mcgonigal_gaming_can_make_a_better_world.html.

Sunday, July 22, 2012

CS855 - Week 2 - Internet of Things and Things of the Internet

The NMC Horizon Report for Education 2012 highlights several innovation trends and technologies that are approximately one to five years from advancing to a phase-one maturity with enough market share to disrupt current industries and ways of educating (Johnson, Smith, Willis, Levine, & Haywood, 2012). The report, however, concentrates on those aspects of innovation that bear on educational prowess and creativity of presentation.

I shall focus on the trend (and technologies) of the so-called Internet of Things - the gadget world of devices that have autonomous identification, intelligence of presence, and the ability to connect to the Internet (initially through the technologies of RFID and NFC), thereby becoming part of the web sphere of objects, often referred to as IoT, with its accompanying architecture (IoT-A, 2012). This trend was marked as being in the category of four to five years from market introduction. Examples of this species of device are smart parts implanted into larger structures that can feed real-time component information over the web to control apps; this information may consist of internal stresses and external feedback from other parts of the mother structure interconnected with them. One example that was not cited is the present ability to embed mechanical sensing devices into materials to detect stress fractures and anisotropic forces in real time, forewarning the central system of materials failure, all controllable over the cloud (given adequate security protocols). The next phase in the development of the semantic web, together with the expanded IPv6 address space, can already accommodate these devices and more. This is not a four-to-five-year innovation; these are, at worst, next year's holiday and industry toys. I think what would qualify as the next species of things on the Internet are truly smart semantic objects on a truly smart semantic web. That would mean devices which, when plugged into the web, find their "place" in the universe (i.e., adapt themselves to a region of connective tissue formed by other similar devices and then build themselves into a new super-structure, devising an entirely new "thing" on the Internet). This is the new biology of the hyper-intelligent web. Why doesn't the report capture any new creativity instead of relying on familiar pseudo-futurological gimmicks that can be gleaned from the NY Times or the Wall Street Journal?
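
As a toy rendering of the embedded-sensor scenario above, here is a minimal sketch (in Python, with made-up device IDs, thresholds, and JSON fields of my own invention, not anything specified by the Horizon Report or IoT-A) of a smart part that samples its own strain and raises an early-warning flag before failure.

```python
# Toy IoT "smart part" reporting strain readings; device IDs, thresholds,
# and the JSON schema are hypothetical illustrations only.
import json
import random
from dataclasses import dataclass

@dataclass
class SmartPart:
    device_id: str          # e.g., an RFID/NFC-derived identity
    strain_limit: float     # strain level taken to signal imminent failure

    def sample_strain(self) -> float:
        """Stand-in for a real strain-gauge reading."""
        return random.uniform(0.0, 1.2) * self.strain_limit

    def report(self) -> str:
        """Package a reading as JSON that a cloud control app could consume."""
        reading = self.sample_strain()
        return json.dumps({
            "device": self.device_id,
            "strain": round(reading, 2),
            "alert": reading > 0.9 * self.strain_limit,  # early-warning margin
        })

if __name__ == "__main__":
    part = SmartPart(device_id="beam-07", strain_limit=850.0)
    print(part.report())
```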

Hyper-intelligent agent Internet things

The Horizon Report board's major methodology for choosing these trends and technologies is a hybrid Delphi method: a partially renewed panel is given curated information on prominent technologies and iterates toward consensus. Some of the board members are from schools somewhat local to one of the places I reside; they are more akin to community colleges and four-year institutions, and I was a bit disappointed by such a selection for the panel. The board also seems to select innovative technologies (and trends) based solely on their effect on educational methodologies. What was missing were research and technology gurus and actual innovators, past and present. With that proviso, one must conclude that such choices of future trends are wholeheartedly educational in focus. In particular, as far as IoT was concerned, no mention was made of security, privacy, or scalability issues, which are major barriers to the deployment of IoT and of a viable IoT-A (IoT-A, 2012). In general, I am not sure this is the best approach to listing future trends of any kind. Educators are oftentimes limited by what they see as cerebral and neurological barriers to penetrating another human brain; they seem to try the same thing over and over again until the student disappears from their classroom. Additionally, who chooses the original materials that the board chooses from? Are these choices biased by this hidden star chamber (the ones who pick the initial offering of materials to the board), if you will? The one-to-two-year innovative trends (and technologies) seem to be already maturing (i.e., the group's picks are based on phase-2 innovations, not pre-innovations).
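
For readers unfamiliar with the mechanics, a minimal sketch of a classic Delphi round (a simple median-feedback variant I am assuming for illustration, not the NMC's exact hybrid process) looks like this.

```python
# Toy Delphi-style convergence: each round, panelists nudge their estimate of
# "years to adoption" toward the panel median. Purely illustrative; the NMC's
# hybrid process is more elaborate than this.
from statistics import median

def delphi_rounds(estimates: list[float], rounds: int = 3, pull: float = 0.5) -> list[float]:
    """Return panel estimates after a few rounds of median feedback."""
    for _ in range(rounds):
        m = median(estimates)
        estimates = [e + pull * (m - e) for e in estimates]
    return estimates

if __name__ == "__main__":
    panel = [1.0, 2.0, 3.0, 5.0, 8.0]   # hypothetical "years to adoption" guesses
    print([round(e, 2) for e in delphi_rounds(panel)])
```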

References


IoT-A (2012). The internet of things - architecture. Retrieved from http://www.iot-a.eu/public/front-page.

Johnson, L., Smith, R., Willis, H., Levine, A., & Haywood, K. (2012). The 2012 Horizon Report. Austin, Texas: The New Media Consortium.

Thursday, July 12, 2012

CS855 - Week 1 - Of Things in Different Times and Patterns


After reviewing some of the first-week comments posted by CTU class participants in CS855, the replies from Cynthia, and the first Breeze class session (which I regrettably was not able to attend while driving across Highway 83 in South Texas, the road that runs alongside the Rio Grande border, never more than a mile from it - a sort of wild-west frontier still), I am still amazed at the super progress of the hard sciences since the publication dates of the assigned books, in particular The Fortune Sellers (Sherden's play on the phrase "fortune tellers"). Again, this is not a direct criticism of Sherden (he wrote for another period), but I keep finding disagreement with many of his conclusions, especially his fatalism about the indeterminism of the social sciences, economics, and other disciplines that purport to model social phenomena. Things have changed dramatically in a little over a decade. In fact, by adding just a decade to the time periods of many of his criticisms of how scientists "got it wrong" during the 1950s, 60s, and 70s, they happened to have gotten it right after all. This is not just a coincidence. Because we better understand the technical aspects of complexity and of black-swan and long-tail events, social scientists are better at describing phenomena and, consequently, many of their meta-projections - albeit nothing is perfectly tuned to the future, because of quantum fluctuations and the like, and notwithstanding the calamity of the banking crisis of 2008 and the heartbreak of 9/11, both events of which we could easily have been forewarned using the intelligence data and technologies existing at the time.

However, when Sherden does mention quantum mechanics, he uses it in an abusive manner - he does not quite grasp that it describes microcosms, not macro-worlds such as social constructs at our scale. Scalability, when used in terms of emergence in complex adaptive structures, is another matter altogether - it actually has technical meaning. Apparently, evolutionary patterns take place across scale boundaries in complex adaptive structures, and this is when things really take shape, so to speak, at the meso-scale level. I have been evangelizing this for two years now, since my work on computational information evolution started.

Secondly, there is a phenomenological model called stochastic chaos, and that type of chaotic structure is not deterministic; in fact, it is your worst nightmare in terms of modeling. It does not stop there - there is quantum chaos as well, with quantum probability and stochastics part and parcel of that chaotic model. So chaos is not so clearly defined and, even more interesting, it is hard to discern whether a phenomenon is indeed a chaotic process. Time-series analysis is usually done as a first step in order to approximate whether something approaches being chaotic.
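
Since the paragraph above ends with time-series analysis as a first screen for chaos, here is a minimal sketch (using the textbook logistic map as a stand-in for real data; parameter choices are mine) that estimates the largest Lyapunov exponent, whose positive value is the usual rough indicator of deterministic chaos.

```python
# Rough Lyapunov-exponent estimate for the logistic map x' = r*x*(1-x).
# A positive exponent suggests chaos; this is a toy screen, not a full
# nonlinear time-series analysis of empirical data.
from math import log

def lyapunov_logistic(r: float, x0: float = 0.4, n: int = 10_000, burn: int = 1_000) -> float:
    x, total = x0, 0.0
    for i in range(n + burn):
        x = r * x * (1.0 - x)
        if i >= burn:
            total += log(abs(r * (1.0 - 2.0 * x)))  # log |f'(x)| along the orbit
    return total / n

if __name__ == "__main__":
    print(round(lyapunov_logistic(3.2), 3))   # negative: periodic, not chaotic
    print(round(lyapunov_logistic(4.0), 3))   # ~0.693 (= ln 2): chaotic regime
```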

On economic forecasting, Sherden dismisses chaos entirely as a tool for economics, based on brief statements from Brian Arthur and others at the SFI. However, the SFI understands the importance of the meta-patterns introduced by the idea of chaotic processes as models. In particular, while Mandelbrot's development of a fractal chaos describing some economic processes, such as macro stock-market dynamics, is not a micro-definition, it is nonetheless instructive in finding economic meta-patterns. Stochastic modeling of market prices is what quants and arbitrageurs do, especially in applying stochastic processes (usually Ito-type models and variants of Black-Scholes) to those markets, and they all failed to predict the banking and housing crisis of 2008 and other micro movements - but that was not the fault of the mathematics per se. Incomplete information, mixed with larger-than-expected uncertainty in synergies, plus the mob irrationality of all of us trying to get the things we want instead of the things we need, contributed to those mistakes. That is essentially the point Nassim Taleb (he of black-swan fame) was trying to make. These sub-phenomena can all be modeled with more accuracy now, as can the adaptive complexity of their compounded processes, at least in meta-patterning. Sherden hand-waves through all of these very hard scientific methodologies and pronounces them impotent in one wand movement. Not even Taleb (a former math quant) does this.
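
To ground the "Ito-type" remark, here is a minimal sketch (with drift, volatility, and horizon values chosen purely for illustration, not calibrated to any market) of the geometric Brownian motion underlying Black-Scholes-style price models, simulated with the Euler-Maruyama scheme.

```python
# Toy geometric Brownian motion path, dS = mu*S dt + sigma*S dW, via
# Euler-Maruyama. Parameters are arbitrary illustrations only.
import random

def gbm_path(s0: float, mu: float, sigma: float, t: float, steps: int) -> list[float]:
    dt = t / steps
    path = [s0]
    for _ in range(steps):
        dw = random.gauss(0.0, dt ** 0.5)            # Brownian increment
        path.append(path[-1] * (1.0 + mu * dt + sigma * dw))
    return path

if __name__ == "__main__":
    random.seed(42)
    prices = gbm_path(s0=100.0, mu=0.05, sigma=0.2, t=1.0, steps=252)
    print(round(prices[-1], 2))   # one simulated year-end price
```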

I propose that futurists - who, when endeavoring to prognosticate from projections of current technology, limit themselves through situation bias, as Sherden correctly asserts - must look beyond their own skins, beyond their world, and beyond even their dreams. The decade figure I threw out above seems to point to a perceived future lag built into these situation biases. Technical futurists like Michio Kaku recognize these lags and have adjusted some of their prognostications, which is probably a good way to get projections more into the "ball park" or realm of possibility. I like the proposal of the late, great sci-fi writer and biochemist Isaac Asimov about developing a social physics that looks at history paths as patterns rather than as data for predictions. I think that patterns are more important than data or statistical analysis per se. Complexity is really about non-linear dynamics, which dictates that fluctuations (quantum or not) introduce too much uncertainty, at the beginning or while a process is alive, to allow any precise determination of pinpoint future events. Patterns, on the other hand, are a way of categorizing what could happen and therefore which risk portfolios we might want to adopt. We should all have our own built-in insurance plans.

Sherden does pay proper homage to Karl Popper's foundational work on complexity, society, and technological advance. But, again, Popper could not possibly have foreseen a century's progress of science. Einstein thought that the hardest problems are, and will continue to be, those of individual human movements. Societies are built out of these atoms, yet structures, as adaptive as they may be, display patterns - and again, that is the key to "seeing through and beyond time." This brings up my most important point about the future of futurology: time dilation, or the redefinition of time, will make all of this a moot subject, because then it is just a matter of running continuous-time lab experiments - we are all, again, rodents in petri dishes.

On the point of what the forces are (those words ending in "-al", though almost anything can be turned into such a thing): should we not define them more dynamically and precisely? Otherwise they become too general, without precise meaning or purpose. I like to look at forces in the physical sense: they are precise quantities when things are measurable. Forces can then be described by fields, another very useful, ubiquitous, and beautiful abstract construct of mathematical physics (not the mathematical object, although the two could be woven into a single overarching abstract entity). Humans and social groups generate forces through the intermixing and synergism of different actions, and those actions could then be quantified to an approximate extent.

CS855 - The beginning, a futuring tool - Futuring a General Stochastic Non-Frege-Aristotelian Hypercomputer Universe


Conventional wisdom (whatever that may be) holds that futuring a world, much less a universe, is a fool's paradise extraordinaire. Even after taking into account non-linear dynamics (chaos), stochasticity, and non-Frege-Aristotelian many-valued logics, one is trapped by extrapolational biases and mindsets - they are in our neural fiber. Also in our wetware, however, is a propensity to develop scenarios for transformational paradigms - thinking beyond the realm of senses, dreams, and gods. Futuring is our anthropomorphic way of dealing with this cranial catharsis.

My proposed manner of performing futuring is through hypercomputation, using history paths as inputs and Bayesian stochastic operators as controllers (gates, junctions, and other advanced computational operations). Non-Frege-Aristotelian many-valued (or continuous-valued) logics, which subsume quantum probability, fuzzy logic, belief systems, possibility spaces, and other uncertainty regimes, can then play the role of interchangeable (hybrid) logic frameworks for these operators. Information chunks, the atoms from which history paths are formed, are extended to represent generalized uncertainty bits - gbits. Hypercomputation is visualized as general machines that take physical universes into account within quasi-causal frameworks - causaloids. Abstractions such as quantum, relativistic, quantum-gravity, super-Turing, von Neumann, Gödelian, Zeno, and universal Turing machines (UTMs) are instantiations of this computational framework calculus.
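
As a toy rendering of the "gbit plus Bayesian operator" idea (every name and structure below is my own illustrative invention, not an established hypercomputation formalism), here is a sketch in which a generalized uncertainty bit carries a distribution over outcomes and a single Bayesian "gate" updates it from an observed history path.

```python
# Toy "gbit": a distribution over outcomes updated by a Bayesian operator.
# Purely illustrative of the proposal above, not an established formalism.
from collections import Counter

def normalize(dist: dict[str, float]) -> dict[str, float]:
    z = sum(dist.values())
    return {k: v / z for k, v in dist.items()}

def bayes_update(prior: dict[str, float], likelihood: dict[str, float]) -> dict[str, float]:
    """One Bayesian 'gate': posterior = prior * likelihood, renormalized."""
    return normalize({k: prior[k] * likelihood.get(k, 0.0) for k in prior})

if __name__ == "__main__":
    gbit = {"stable": 0.5, "critical": 0.5}              # maximally uncertain gbit
    history_path = ["stable", "stable", "critical"]      # hypothetical observations
    counts = Counter(history_path)
    likelihood = normalize({k: counts.get(k, 0) + 1 for k in gbit})  # add-one smoothing
    print(bayes_update(gbit, likelihood))                # {'stable': 0.6, 'critical': 0.4}
```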

A segue to ... CS 855 Socio-technical Futuring: