

Nanowarehousing: Nanotechnology and Data Warehousing Form versus Function

Originally published June 30, 2004

The focus of this article is to present and discuss the form versus function debate that is silently considered at the macro level of computing.  This is an in-depth discovery discussion, not an overview.  If you have not done so, please read my first article, “The Nanotechnology Revolution,” on the b-eye-network.com website.  If we reduce the concepts to a micro (nano) level, we arrive at the conclusion that form, function and data must be combined through electrical signals and concentrated structures such as nanotubules and quantum dots.

This article is a discussion of “what if?”  What if we brought together form and function at the macro level?  What kinds of tools would we require?  Could these systems begin as a macro-level tool set that can be converted to nano- or atomic-level control?

Form versus Function

Separating form from function has been a boost to productivity and has produced terms such as “data-driven functions,” portability and reusability, as well as major standards in common process definition: UML and object-oriented programming (OOP).

Why combine and bind form and function?  Are we suggesting that the pendulum has swung back the other way?  Absolutely!  We also need to consider the current convergence of IT, Business, Systems, and Data.  While it probably won’t ever get to full convergence, it’s headed in that direction.   In order to understand this mode of thinking, let’s start by examining sheer computational power.

“To solve a 64-bit encryption key for today’s Data Encryption Standard (DES), a Pentium-class digital computer pulling 2 billion calculations per second (2 gigaflops) would require 2^64, or 1.84 × 10^19, operations, or 292.5 years.  By contrast, a 64-qubit quantum computer could solve for the same key in a single operation.”  (Hacking Matter, Wil McCarthy)
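The arithmetic in that quote is easy to verify at the macro level. A minimal sketch, using the 2-billion-operations-per-second rate from the quote and 365-day years (which is how the book’s 292.5-year figure comes out):

```python
# Worst-case search of a 64-bit key space at 2 billion checks per second.
KEY_SPACE = 2 ** 64              # ~1.84e19 possible keys
OPS_PER_SECOND = 2_000_000_000   # the "2 billion calculations per second" figure

seconds = KEY_SPACE / OPS_PER_SECOND
years = seconds / (365 * 24 * 3600)   # 365-day years, matching the quoted figure
print(f"{KEY_SPACE:.2e} operations -> about {years:.1f} years")
# -> 1.84e+19 operations -> about 292.5 years
```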

What is a qubit quantum computer?  It is a cellular structure of atoms holding electrons that spin in multiple directions.  The difference between these atoms and standard atoms is that these atoms have no nucleus; they serve only as a structure for spinning electrons.  The other part of a qubit is that it can hold a 1 or a 0, or a superposition of states: holding both values at the same time.  This poses interesting possibilities: truly parallel operations using opposite data sets at the same time.  If we could program such a device to house historical information and keyed information, and place artificial intelligence at that level, fully parallel discovery and neural networking become a real possibility.
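The “both values at the same time” idea can be illustrated with a toy state-vector model: an n-qubit register in equal superposition carries an amplitude for every one of its 2^n values at once. This is an illustrative classical simulation only, not how quantum hardware is programmed, and the function name is invented:

```python
# Toy state-vector model: an n-qubit register in equal superposition assigns
# the same amplitude to every one of its 2**n basis states at once.
def equal_superposition(n_qubits):
    dim = 2 ** n_qubits
    amplitude = dim ** -0.5          # normalized so probabilities sum to 1
    return {format(i, f"0{n_qubits}b"): amplitude for i in range(dim)}

state = equal_superposition(3)
print(len(state))                                      # 8 basis states held simultaneously
print(round(sum(a * a for a in state.values()), 10))   # total probability: 1.0
```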

We might slowly begin to see progress in solving the contextual-awareness problems that exist today in learning hierarchies.  This is true if we have structures supporting them that resemble either DNA, or neurons and synapses.  Operations in parallel at this level would make today’s computers seem like an on-off light switch with a dimmer control.

Still more computational power: qubits are housed within quantum dots, and quantum dots act as an electrical substrate that causes electrons to emanate as standing waves (rather than passing electrons from atom to atom, which produces heat across conductive materials).

The Nanowarehouse is a series of these qubits, or quantum dots, that house information based on the various electrical signals they receive.  They can reprogram themselves, release their payload, and change from solid to liquid on command.  Can they create “real” atoms, like gold?  No; currently they rearrange existing atoms by shifting electrons, which does not allow those electrons to spin around a nucleus.

In order to store historical information, the qubit would spin its electrons a certain way.  However, qubits can be superpositioned, so what do they represent?  By combining a series of rules that define what they mean and when they mean it, these compounds can become the instructions that control how the qubit is interpreted when queried.  They can determine what chemicals they are interacting with and change their composition to respond accordingly.

Take a leap of faith for a minute and consider that the nature of our very existence is based on a combination of form (body), function (mind), and information (history, housed in the brain).  Are not our body and mind both form and function?  Are not our senses and motor control our methods of interacting with the outside world?  And furthermore, are not our choices based on decisions within our own scope of knowledge and understanding?  If so, then it makes sense to complete form with function at the quantum level.  The form is quantum dots: nanotechnology.  The function is neural-net programming with a historical data store.  The interaction senses the programming elements that describe what to do when it “runs into” another dot, another series of chemicals, or electrical signals.

What about the world I live in today?

RDBMS engines are a key technology for housing both transactional (time-based, transitional) data and historical (warehoused) data.  Functions such as ETL, BI, analytics, cleansing, quality, and data mining sit outside the information stores.  RDBMSs are wonderful tools for providing structure and form, connecting information at base levels determined by key structures.  Functions such as data mining are becoming more prevalent and use neural-net functions, market-basket analysis, predictive analytics and learning pathways to redefine the utilization of the data.

The databases, however, do NOT define what architecture the structure must take in order to achieve massive parallelism or massive information storage.  Nor do they define the functions of input/output, interaction or integration of what the data means.  There are rules regarding access, but the rules of accessing data are “free-form.”  When we think about nanotechnology, the atomic layers of chemical reactions and the integration of electricity must be considered.  Very specific scientific principles govern the electron and proton flows within the atomic layers themselves.

What if one could define a scale-free architecture at a macro level that could be wrapped with functionality?  A preset form would indicate the function of the information and how it interacts with other elements.  Object-oriented programming was the first step in this direction; however, it didn’t go far enough.  As with any invention, additional steps are necessary to progress to the next level.  The Data Vault™ may be one of the viable architectures, possibly allowing us to begin applying hypothetical and theoretical test cases against atomic-level structures such as quantum dots.
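The Data Vault’s core separation into business keys (hubs), relationships (links), and historized descriptive context (satellites) can be sketched as plain data structures. The entity and field names below are illustrative only, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import datetime

# Minimal sketch of the Data Vault's three structure types.
@dataclass
class Hub:                      # a business key, standing alone
    business_key: str
    load_date: datetime

@dataclass
class Link:                     # a relationship between hub keys
    hub_keys: tuple
    load_date: datetime

@dataclass
class Satellite:                # historized, descriptive context for a hub or link
    parent_key: str
    attributes: dict
    load_date: datetime

customer = Hub("CUST-1001", datetime(2004, 6, 30))
order = Hub("ORD-5", datetime(2004, 6, 30))
placed = Link(("CUST-1001", "ORD-5"), datetime(2004, 6, 30))
detail = Satellite("CUST-1001", {"name": "Acme Corp"}, datetime(2004, 6, 30))
```

The point of the separation is that keys and relationships stay stable while descriptive satellites accumulate history alongside them.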

What about the Structure of our Information? How does that have to change?

In any good scale-free architecture there exist multiple elements of keyed information.  There is information that stands alone: information that doesn’t require anything else to be understood.  For example: 12 December 1989.  Does that ring a bell for anyone?  It might, or it’s just another date in the calendar of 1989.  Everyone can identify the date; for some it may be a birthday, for others an anniversary, and for still others it’s just a day on the calendar.  The point is: it’s the same date, but it’s interpreted differently by different individuals.

Where does your interpretation come from?  Memories?  Smells?  Sights?  Sounds?  You’re using your informational processing engine (the brain) to process the data based on a single key.  Single keys don’t mean much without interpretation or context.  Just like electrons spinning in free space: unless they’re assembled and interpreted according to rules, they can be meaningless.
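One key with many context-dependent interpretations can be sketched as a simple lookup; the contexts and meanings here are invented for illustration:

```python
from datetime import date

# One key, many interpretations: the meaning comes from the surrounding
# context, not from the key itself.
key = date(1989, 12, 12)

interpretations = {
    "person_a": "a birthday",
    "person_b": "an anniversary",
    "person_c": "just a day on the calendar",
}

def interpret(key, context):
    return f"{key.isoformat()} is {interpretations.get(context, 'unknown')}"

print(interpret(key, "person_a"))  # 1989-12-12 is a birthday
```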

Within the Nanowarehouse, we must surround these structures and informational stores with instructions on how and when to interact based on certain criteria.  I’m suggesting that these quantum dots and Nano-architectures become “self-aware” at least enough to understand the payload they are carrying.  We are now back to the point of structure, form versus function.

This research needs more definition as to how to capture, encode, and control a Nanowarehouse.  Here we should not ignore the structure known as DNA (deoxyribonucleic acid).  DNA holds the keys to our definition (hair color, height, weight, and abilities to perform).  It also holds very interesting functionality (amino acids and cell motors).  When a cell divides, it unwinds a particular set of DNA, replicates it by building the atomic elements, and then rewinds the DNA.

What does DNA have to do with Nanowarehousing?

Everything!  When we discuss the very fabric of our existence and how it captures form, function, and information, we discuss the possibility of storing historical information at an atomic level (referring to the ATOM level of data).  Computing at this level is indistinguishable from magic, particularly with the identification of quantum dots.

There are many questions to be addressed, such as: Isn’t one of the functions of the Data Warehouse to “predict” what will happen in the future?  Do we have a clear definition of what we want to predict?  Is one of the purposes of analytics to find out what customers will do when they initially engage a particular business?  These are all relevant questions that will affect predictive anomalies:

“Similarly, the classical ‘laws of physics’ are really just statistical observations – the averaged behavior of large groups of atoms – and these averages, like any statistics, lose their validity as the sample size decreases.  There is no ‘average particle,’ just as there’s no ‘average human being’ or ‘average hockey game.’”  (Hacking Matter, Wil McCarthy)

Likewise, I would argue that sample sets that are too large are not granular enough to produce meaningful information because “outliers” within the data would have less statistical relevance.  We’ll return to this point, but for now, let’s address the issues of what will happen to the data warehouse.  In my prior article, I discussed the potential of a Nanohouse, otherwise known as a “true atomic level or Nano-scale data warehouse.”  The data warehouse will evolve, just like other necessary technology. 
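The diluting effect of large samples on outliers is easy to demonstrate with synthetic data: the same outlier pulls the mean of a small sample far more than the mean of a large one. A minimal sketch:

```python
# The same outlier (value 100) added to otherwise-identical samples:
# its pull on the mean shrinks as the sample grows.
def mean_shift(n, baseline=10, outlier=100):
    sample = [baseline] * n + [outlier]
    return sum(sample) / len(sample) - baseline

print(mean_shift(10))    # large shift with 10 baseline points
print(mean_shift(1000))  # far smaller shift with 1000 baseline points
```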

History is nothing more than information, combined and stored in an understandable form.  What is the true nature of information?  We begin to understand and recognize the patterns of electromechanical signals when they are combined at a macro level.  So what is a quantum dot?  It is the shell of an atom that can be used to house different atomic-level elements and can be changed at will.  This shell has no nucleus and, when charged with certain atomic elements, can act as a container.

What relevance does this have to my business applications?

As I stated in my last article, the future technologist or IT individual will have advanced degrees in chemistry, physics, and/or mathematics.  The technology for your business will be dependent on the technology that is available.  The technology MUST go to the quantum level to overcome the resistance problems of conductive electricity.  More to the point: businesses will be run on data collected from Nanotech atomic level components in near real-time (as fast as radio waves, or light waves can travel).

What else will happen?  Is it really performing business at the speed of light?  YES!  We will have reached the unrecognizable (unidentifiable to the human eye, ear, or nose) classification and dissemination of data.  RFID (radio frequency identification) tags are just one way of tracking store-shelf items.  For example, Wal-Mart just ordered over 30,000 of these tags.  Both businesses and government claim that these tags will be disabled at the time of purchase.  However, what if an RFID tag is not disabled and is being used for another purpose?  Then the RFID application can continue tracking the item anywhere it goes.

Soon data will explode, and geo-coordinates will be linked with addresses for validity.  RFIDs are just one entry into the nano-world.  From a business perspective, consider a different spin on this topic: how does it affect your data quality, bottom-line profits, and the buying habits of individual customers?  Does this concern you?

Conclusions and Summary

In the prior article, I raised the question of control over nanotech, or atomic-level, structures; I suggested that we would have to think about new controlling structures: macro-level programs and models that are mimicked at the nano or atomic level.  This article is an attempt at just such a proposal.  I feel like I’m carving a Space Age tool with a rock and a stick!  However, it is just as important to start the process of eventually controlling atomic structures as it is to investigate their application.

To draw a parallel between these structures: compilers and interpreters (ETL, BI, SQL, etc.) offer process definition, the function of how data will act and interact with other information in the system.  Databases offer the structural component of storage, definition, and base rules (referential integrity) for the information or data.  At a biological level, the brain stores both form and function.  The nuclei of neurons focus on information gathering, storing and registering, while the chemical balances and electrical signals they produce travel along axons and dendrites to provide function to the structure.

As with any good futuristic theory, I propose some methods, via suggested experiments, which may or may not work.  Unfortunately, I don’t have the proper equipment to completely implement the design suggestions.  It is clear that the nanotechnology industry changes the view of the technological world completely.  We will need new tools, new engineering paradigms and new architectures.  We will be forced to combine form with function in order to overcome electromagnetic resistance levels and other laws of macro-physics.

In case you’re curious, or if you’re a researcher, or you wish to get in touch with me, I’d love to hear your thoughts, comments and feedback on this issue – both critical and thoughtful perspectives.  This is a research interest of mine.

  • Dan Linstedt

    Cofounder of Genesee Academy, RapidACE, and BetterDataModel.com, Daniel Linstedt is an internationally known expert in data warehousing, business intelligence, analytics, very large data warehousing (VLDW), OLTP, and performance and tuning. He has been the lead technical architect on enterprise-wide data warehouse projects and refinements for many Fortune 500 companies. Linstedt is an instructor of The Data Warehousing Institute and a featured speaker at industry events. He is a Certified DW2.0 Architect. He has worked with companies including IBM, Informatica, Ipedo, X-Aware, Netezza, Microsoft, Oracle, Silver Creek Systems, and Teradata. He is trained in SEI / CMMi Level 5, and is the inventor of The Matrix Methodology and the Data Vault data modeling architecture. He has built expert training courses, trained hundreds of industry professionals, and is the voice of Bill Inmon’s blog on http://www.b-eye-network.com/blogs/linstedt/.
