

Blog: Dan E. Linstedt

Dan Linstedt

Bill Inmon has given me this wonderful opportunity to blog on his behalf. I like to cover everything from DW2.0 to integration to data modeling, including ETL/ELT, SOA, Master Data Management, unstructured data, DW, and BI. Currently I am working on ways to create dynamic data warehouses, push-button architectures, and automated generation of common data models. You can find me at Denver University, where I serve on an academic advisory board for master's students in I.T. I can't wait to hear from you in the comments of my blog entries. Thank you, and all the best; Dan Linstedt http://www.COBICC.com, danL@danLinstedt.com

About the author

Cofounder of Genesee Academy, RapidACE, and BetterDataModel.com, Daniel Linstedt is an internationally known expert in data warehousing, business intelligence, analytics, very large data warehousing (VLDW), OLTP, and performance and tuning. He has been the lead technical architect on enterprise-wide data warehouse projects and refinements for many Fortune 500 companies. Linstedt is an instructor of The Data Warehousing Institute and a featured speaker at industry events. He is a Certified DW2.0 Architect. He has worked with companies including IBM, Informatica, Ipedo, X-Aware, Netezza, Microsoft, Oracle, Silver Creek Systems, and Teradata. He is trained in SEI/CMMI Level 5, and is the inventor of the Matrix Methodology and the Data Vault data modeling architecture. He has built expert training courses, trained hundreds of industry professionals, and is the voice of Bill Inmon's blog at http://www.b-eye-network.com/blogs/linstedt/.

The Nanohouse computing device is still just a dream today, and it may be bound to stay that way for some time. It never hurts, though, to explore the "what-if" side of things. In this blog entry we explore the advances made in DNA computing and self-assembly. Self-assembly is an important part of nanoscale machines: it provides the ability to produce consistent, repeatable (and ordered) circuitry. These patterns are the very foundation of the Nanohouse large-scale data capture and modeling efforts. (Reference: http://sawww.epfl.ch/SIC/SA/publications/SCR02/scr13_page23e.html)

"This stuff is coming," Uldrich says, "and it's coming a lot sooner than many people believe." ComputerWorld.

Molecular electronics is one of the most promising directions in nanotechnology [1]. The building blocks of future molecular electronic devices could be specially designed organic molecules assembled on appropriate substrates into useful circuits through the processes of self-assembly, i.e. the spontaneous organization of the molecular building blocks... SuperComputing Review Publication.

My hypothesis:
The larger systems get, the more order they must have - or they become unmanageable, unwieldy, and begin behaving badly.

For example, consider the initial construction of the automobile. When Henry Ford sat down and thought about the problem of "mass production with consistent quality," he came up with a revolutionary system: build all automobiles the same way every time - that answers the quality side of it - and then add repeatable and redundant tasks along a series of checkpoints - voila, the assembly line.

What do you think would have happened to the creation of the automobile if he had said: build 100 autos a day, and everyone needs to be an expert in their field and build their own car from the bottom up (without an assembly line)?
Chaos would have ensued, and his factory probably would have fallen apart from all the mistakes that were made. No single individual could have been an expert in every aspect of building the car. Consistency, repeatability, and order are the keys to automation - and thus to the self-assembly of the nanoscale warehouse.

"The concept of a mass-produced structure with dimensions measured in atoms helps explain why researchers are turning to nanotechnology as the next great hope for Moore's Law..." ComputerWorld

The nanohouse relies strongly on these principles - so strongly, in fact, that it forces us to rethink the way we compute, store, and utilize information (data). Data models that represent 2D space are no longer enough. We must concentrate our efforts on 3D modeling and learn from the molecules involved in the nanoscale calculations.

Example: "Another important simplification is made when the interaction of valence electrons with the electrons of the inner electronic shells of atoms is described by effective atomic pseudo potentials."

Let's paraphrase and over-simplify as we apply this to the nanohouse:
"Another important simplification is made when the interaction of [two or more business keys] with the [business keys of other elements] is described by [relevancy and frequency of relationship] potentials." The business keys provide the unique reference points into the information housed within the nanoscale devices.

The job of the nanoscale devices is to:
a. understand the data they carry (have some knowledge as to what would constitute a weak or strong bond i.e. relevancy)
b. understand what other nanoscale components they are allowed to "connect with" or self-assemble to.
c. propel themselves through the environment looking for other elements to attach to (a rough sketch of these three behaviors follows below).
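
Here is a rough, purely illustrative Python sketch of those three behaviors. It assumes that "bond strength" can be approximated by overlap of business keys; the Element class, the overlap measure, and the 0.3 threshold are my own inventions, not anything from the nanoscale literature:

```python
import random

class Element:
    def __init__(self, name, business_keys):
        self.name = name
        self.business_keys = set(business_keys)  # (a) the data it carries
        self.bonds = []

    def bond_strength(self, other):
        # (a) weak vs. strong bond: here, simple overlap of business keys.
        shared = self.business_keys & other.business_keys
        return len(shared) / max(len(self.business_keys | other.business_keys), 1)

    def can_connect(self, other, threshold=0.3):
        # (b) only self-assemble with sufficiently relevant components.
        return other is not self and self.bond_strength(other) >= threshold

    def seek(self, population):
        # (c) wander the environment and attach to the first compatible element.
        for other in random.sample(population, len(population)):
            if self.can_connect(other):
                self.bonds.append(other)
                return other
        return None

elements = [
    Element("E1", {"CUST-1001", "INV-2017"}),
    Element("E2", {"INV-2017", "SKU-42"}),
    Element("E3", {"EMP-7"}),
]
for e in elements:
    partner = e.seek(elements)
    print(e.name, "->", partner.name if partner else "no bond")
```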

The result would be an incredible ability to form "memory-like" structures - hopefully one baby step closer to human brain functionality. Such a system would be capable of re-wiring itself by changing its self-assembled structure, or by being stimulated by an outside charge.

Let's examine this from a scientific perspective as it relates to the modeling necessary to represent a system like this:
"The most demanding parts of the calculations are i) the fast Fourier transforms (FFT) needed to evaluated the total charge density in real space and ii) the scalar products between wavefunctions, which are necessary to enforce orthogonality between the orbitals. Both operations can be efficiently parallelized [4] so that the overwhelming majority of the operations are performed locally on each processor through calls to optimised library routines (matrix-matrix multiplications (MMM) and one-dimensional FFT), while a carefully written proprietary three-dimensional (3D) FFT routine assures that the communication overload is minimized during grid transpositions." SuperComputing Journal

We must change our "data modeling" skills into biomechanical modeling skills. Why is this a big leap? Why is it so important for our success moving forward? What impact does it have on the Nanohouse of the future?

"Information and algorithms appear to be central to biological organization and processes, from the storage and reproduction of genetic information to the control of developmental processes to the sophisticated computations performed by the nervous system. Much as human technology uses electronic microprocessors to control electro-mechanical devices, biological organisms use biochemical circuits to control molecular and chemical events. The ability to engineer and program biochemical circuits, in vivo and in vitro, is poised to transform industries that make use of chemical and nano-structured materials." California Institute of Technology

What we need to address NOW is our primitive thought processes. It's time to think outside the box - time to expand our horizons. Can we get a Data Modeling tool vendor to finally come to the table and offer 3-D modeling based on variances, strength of bonding (associative properties), and relevance? If we can build some of these attributes into our respective data models - that's one step closer to the nanohouse. Of course there are hundreds of miles to go before we get there. The modeling is where it starts; from there we can begin to focus our efforts on the programmatic shifts that must take place.
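
As a thought experiment on what such a tool might do, here is a hypothetical Python sketch that places entities in 3-D space according to the strength of their bonds: strongly bonded, highly relevant entities are pulled close together, weakly bonded ones drift apart. The entity names, the bond matrix, and the force rules are all made up for illustration; a real modeling tool would do far more.

```python
import numpy as np

entities = ["Customer", "Invoice", "Product", "Employee"]
bond = np.array([          # strength of bonding / relevance, 0..1
    [0.0, 0.9, 0.2, 0.1],
    [0.9, 0.0, 0.7, 0.0],
    [0.2, 0.7, 0.0, 0.0],
    [0.1, 0.0, 0.0, 0.0],
])

rng = np.random.default_rng(42)
pos = rng.standard_normal((len(entities), 3))   # random starting coordinates

# Crude force-directed layout: bonds attract, everything repels a little.
for _ in range(200):
    for i in range(len(entities)):
        for j in range(len(entities)):
            if i == j:
                continue
            delta = pos[j] - pos[i]
            dist = np.linalg.norm(delta) + 1e-9
            attract = bond[i, j] * dist          # strong bonds pull harder
            repel = 0.1 / dist                   # mild universal repulsion
            pos[i] += 0.01 * (attract - repel) * delta / dist

for name, p in zip(entities, pos):
    print(f"{name:10s} {p.round(2)}")
```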

Additional blog entries will continue exploring the nanohouse along with the notions of DNA computing, and self-assembly. We will explore the notions of the hypothesis stated earlier and work at uncovering what happens to a system when it expands beyond order.

Seen any interesting nanotech articles lately? I'd love to hear about them. What's your view on Nanotech, DNA computing, Information Modeling? Sound off!


Posted September 13, 2005 6:03 AM