Journey into Nanohousing—Information Integration of the Future

Originally published June 9, 2005

The focus of this article is to present and discuss the fundamentals of DNA computing power and to bridge the gap into a futuristic look at Information Integration in the Nanotech world. Future articles will venture further into speculation about a Nanohousing device: what the models look like, what technology is needed to build them, and how information (both data and programmatic functions) is managed within the ultimate analytical engine, all based on bio-mechanical DNA structures. This article focuses on the abstraction of DNA computing power used to produce the world's first Nanohouse: an integrated data store combined with functionality and recognition, including the ability to attach and detach (self-assemble, break down, score) nodes inside a massively parallel computing tree.

Re-Introduction to DNA Computing

In case you missed some of the first articles I've written, or if this is your first introduction to Nanohousing, let's recap why DNA computing is important by explaining what it brings to the table. By the way, DNA computing is very real today; it has been proven effective in multiple lab reports, DARPA experiments, and even professional business journals. DNA computing is the ability to perform mathematical operations within or across DNA strands in massively parallel fashion.

"The excitement DNA computing incited was mainly caused by its capability of massively parallel searches. This in turn showed its potential to yield tremendous advantages from the point of view of speed, energy consumption and density of stored information. For example, in Adleman's model the number of operations per second was up to 1.2 × 10¹⁸. This is approximately 1,200,000 times faster than the fastest supercomputer. While existing supercomputers execute 10⁹ operations per Joule, the energy efficiency of a DNA computer could be 2 × 10¹⁹ operations per Joule. That means that a DNA computer could be about 10¹⁰ times more energy efficient. Finally, storing information in molecules of DNA could allow for an information density of approximately 1 bit per cubic nanometer, while existing storage media store information at a density of approximately 1 bit per 10¹² nm³. A single DNA memory could hold more words than all the computer memories ever made." (Lila Kari, Process of Bio-Computing and Emergent Technology, 1997)
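
To make the idea of a massively parallel search concrete, here is a minimal Python sketch that simulates, in ordinary sequential software, what Adleman's experiment did chemically: generate candidate paths through a graph and filter out everything that is not a Hamiltonian path. The toy four-node graph is my own stand-in for his seven-city instance; in the wet lab, all candidate strands form and are filtered in one massively parallel chemical step rather than one at a time.

```python
from itertools import permutations

# Toy directed graph standing in for Adleman's seven-city instance.
EDGES = {("A", "B"), ("B", "C"), ("A", "C"), ("C", "D"), ("B", "D")}
NODES = {"A", "B", "C", "D"}

def hamiltonian_paths(start="A", end="D"):
    """Yield every path from start to end that visits each node
    exactly once -- the same filtering steps Adleman ran chemically."""
    for middle in permutations(NODES - {start, end}):
        path = (start, *middle, end)
        # Keep the path only if every consecutive pair is a real edge.
        if all((a, b) in EDGES for a, b in zip(path, path[1:])):
            yield path

for p in hamiltonian_paths():
    print(" -> ".join(p))  # prints: A -> B -> C -> D
```

The combinatorial explosion lives in the `permutations` step; the promise of DNA computing is that this step happens all at once in a test tube instead of one candidate at a time.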

In order to construct our first Nanohouse we should establish some basic requirements under which it must perform. Below is the short list of such requirements.

  1. The Nanohouse must be capable of storing and retrieving data within the DNA strands.
  2. The Nanohouse must be based on Nanotechnology, be it DNA computing, or any other Nanotechnology that is available.
  3. The Nanohouse CANNOT and WILL NOT separate the form and storage from the function of that data. The Nanohouse requires that the programming to interpret, understand and communicate with other nano-computing devices is wrapped within a single computing unit (such as DNA strands encapsulated with RNA and ribosomes for replication).
  4. The Nanohouse components may self-assemble or disassemble, either on command or as deemed appropriate by the logic within the DNA sections.
  5. Each component or DNA section is self-sufficient and is, in its own right, a Nanohouse.
  6. The DNA Nanohouse recognizes four "bit states," A-C-T-G, which carry more information per position than the on-off (1/0) states of electronic circuitry. Superimposed bit-states can be represented by specific combinations of these bases (a minimal encoding sketch follows this list).
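
That requirement is easy to make concrete in software. The sketch below maps binary data onto the four bases, two bits per nucleotide; the specific encoding table is purely my own assumption for illustration, not a published standard.

```python
# Hypothetical 2-bits-per-base encoding: each nucleotide carries one
# of four states, twice the information of a single binary bit.
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    """Pack bytes into a strand of A/C/G/T, four bases per byte."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Recover the original bytes from a strand."""
    bits = "".join(BITS_FOR_BASE[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"hi")        # 2 bytes -> 8 bases: "CGGACGGC"
assert decode(strand) == b"hi"
```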

There are many more rules by which the Nanohouse must abide; however, the list is too extensive to provide here. The basic gist is that the Nanohouse must abide by bio-mechanical rules, along with some twists to handle data, form and function all in the same space. Separation of content from form from function would spell disaster. Unfortunately, in today's world of warehousing and integration, this is exactly what we've done, which is why (I speculate) we cannot produce gradients of content and nature of importance, and also why (I speculate again) we have to use artificial means such as data mining algorithms to put the three back together again.

Bio-mechanical objects like DNA strands know and understand what their purpose is. They also house the data to accomplish that purpose—and furthermore, they house the programming or algorithms necessary to complete tasks. I hypothesize that if we are to make a truly knowledgeable or thinking machine, we must start with the building blocks, and begin with convergence of data, form and function on the molecular level. Hence, the nature of Nanohousing takes on new twists.

Other Features of the Nanohouse

When we think about building the Nanohouse, we ask: What other features will it have? What can it do? Where will it apply? And why do we need one?

The Nanohouse will operate in all-parallel mode all the time. It will have the ability to build up (self-assemble) multiple Nanohouses into a larger contextual solution, and to separate (disassemble) into its component parts so that new operations can be performed. The Nanohouse will also be responsible for understanding a request, knowing the information it contains, and answering whether or not it can service the request based on that information; all Nanohouse elements will do this in parallel (a sketch of this request-answering pattern follows).
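
Here is a minimal Python sketch of that pattern, assuming a hypothetical NanohouseNode class: the request is broadcast to every node at once, and only the nodes holding the relevant information volunteer to service it.

```python
from concurrent.futures import ThreadPoolExecutor

class NanohouseNode:
    """A hypothetical node that knows what information it contains."""
    def __init__(self, name, keys):
        self.name, self.keys = name, set(keys)

    def can_service(self, key):
        # "Do I hold the information this request needs?"
        return key in self.keys

NODES = [NanohouseNode("n1", {"temperature"}),
         NanohouseNode("n2", {"pressure"}),
         NanohouseNode("n3", {"temperature", "flow"})]

def who_can_answer(key):
    # Ask every node at the same time, as the all-parallel
    # Nanohouse would, and collect the volunteers.
    with ThreadPoolExecutor() as pool:
        answers = pool.map(lambda n: (n.name, n.can_service(key)), NODES)
        return [name for name, ok in answers if ok]

print(who_can_answer("temperature"))  # ['n1', 'n3']
```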

The following functions are available on a bio-molecular level; therefore they must be coded into the software/programming of the Nanohouse: separating, extracting, cutting (writing/updating), ligating (writing), substituting (writing/updating), marking, destroying, detecting and reading. All of this must be coded under security rules that answer questions such as the following (a sketch of such gated operations appears after the list):

  • Can I cut this DNA?
  • Am I allowed to update this portion of the strand?
  • Does the new data fit the existing DNA, or is it destructive? 
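
Sketched in Python, one possible shape for those gated operations might look like this. The Strand class, its writable regions, and the permission checks are entirely hypothetical; the sketch only shows how the three questions above might gate a cut or a substitution.

```python
from dataclasses import dataclass, field

@dataclass
class Strand:
    """A toy DNA strand with per-position write permissions."""
    bases: str
    writable: set = field(default_factory=set)  # positions open to update

    def can_cut(self, pos: int) -> bool:
        # "Can I cut this DNA?" Only at positions marked writable.
        return pos in self.writable

    def substitute(self, pos: int, base: str) -> bool:
        # "Am I allowed to update this portion of the strand?"
        if pos not in self.writable:
            return False
        # "Does the new data fit the existing DNA, or is it destructive?"
        if base not in "ACGT":
            return False
        self.bases = self.bases[:pos] + base + self.bases[pos + 1:]
        return True

s = Strand("ACGTACGT", writable={2, 3})
assert s.substitute(2, "T")       # allowed region, valid base
assert not s.substitute(0, "A")   # read-only region: refused
print(s.bases)                    # ACTTACGT
```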

Most security programming will be inherent (implicitly specified) in the biological nature of the DNA strand itself. Certain types of enzymes and encoding schemes won't, or shouldn't, be allowed. This makes it virtually impossible to construct an encoded virus that affects the system.

What About Errors?

Errors occur even in natural systems. However, most natural systems have the ability to spot errors and reduce or eliminate them. Think of our immune system: a set of DNA structures whose specific function is to generate antibodies that eliminate viruses and bacteria when the body gets sick. This means the Nanohouse should assemble different kinds of clusters with different programming. Certain types of programming in the Nanohouse will roam the solution to find rogue nanites (a new term) and destroy them. Other types of programming will serve as a nervous system. Still others will serve as an adaptation or experimentation lab (in which modifications to the DNA and its coding can be tested for viability). Finally, other types of programming will serve as construction of context and hypothesis: a thought lab, if you will.

So what about errors? The biology of nature solves this problem in several ways, one of which is massive redundancy. Where the Nanohouse is concerned, redundancy is possible precisely because DNA strands are so small and dissipate so little heat; in fact, it's not only possible, it's required. Another mechanism is error-correction instructions built right into the DNA strand itself, allowing checks and balances for all the operations that take place within a single Nanohouse. A simplified sketch of redundancy-based correction follows.
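
The sketch below stores every base three times and majority-votes on readback, masking any single-base error. The triple-redundancy factor is my own assumption for illustration; nature's mechanisms (proofreading enzymes, repair pathways) are far richer.

```python
from collections import Counter

REPLICAS = 3  # assumed redundancy factor; nature uses far more

def write_redundant(strand: str) -> str:
    """Store each base REPLICAS times: 'ACG' -> 'AAACCCGGG'."""
    return "".join(base * REPLICAS for base in strand)

def read_corrected(raw: str) -> str:
    """Majority-vote each group of REPLICAS, masking single errors."""
    groups = [raw[i:i + REPLICAS] for i in range(0, len(raw), REPLICAS)]
    return "".join(Counter(g).most_common(1)[0][0] for g in groups)

stored = write_redundant("ACGT")          # "AAACCCGGGTTT"
damaged = "AAA" + "CTC" + "GGG" + "TTT"   # one base flipped in the C group
assert read_corrected(damaged) == "ACGT"  # the vote recovers the original
```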

Errors will be handled in a multitude of ways. In the Nanohouse lab area (mentioned above), errors will be tested for survivability and applicability. Errors become the hypotheses driving the evolution of DNA strands throughout the global Nanohousing community.

Conclusions and Summary

As with any good futuristic theory, I propose some methods that may or may not work. What is certain is that DNA computing is already well on its way. I urge you to read more about it, particularly in the areas of biomechanics and bioinformatics. Other forms of Nanotechnology are also coming to the forefront, such as carbon nanotubes and carbon nanowires, along with engineered, man-made molecules. However, today the DNA computing device shows the most promise for the way we want to apply it.

In the next article we will dive into the risks of creating a DNA computing device (Nanohouse), and discuss some of the benefits that have already been seen through experimentation. Further articles will continue to explore Nanohousing, culminating in the hypothetical perfect-world "Data Warehouse in DNA" concept.

  • Dan Linstedt

    Cofounder of Genesee Academy, RapidACE, and BetterDataModel.com, Daniel Linstedt is an internationally known expert in data warehousing, business intelligence, analytics, very large data warehousing (VLDW), OLTP, and performance and tuning. He has been the lead technical architect on enterprise-wide data warehouse projects and refinements for many Fortune 500 companies. Linstedt is an instructor for The Data Warehousing Institute and a featured speaker at industry events. He is a Certified DW2.0 Architect. He has worked with companies including IBM, Informatica, Ipedo, X-Aware, Netezza, Microsoft, Oracle, Silver Creek Systems, and Teradata. He is trained in SEI / CMMi Level 5, and is the inventor of the Matrix Methodology and the Data Vault data modeling architecture. He has built expert training courses, trained hundreds of industry professionals, and is the voice of Bill Inmon's blog at http://www.b-eye-network.com/blogs/linstedt/.
