Nanocomputing

Massively parallel computing on an organic molecular layer
"Here we demonstrate an assembly of molecular switches that simultaneously interact to perform a variety of computational tasks including conventional digital logic, calculating Voronoi diagrams, and simulating natural phenomena such as heat diffusion and cancer growth. As well as representing a conceptual shift from serial-processing with static architectures, our parallel, dynamically reconfigurable approach could provide a means to solve otherwise intractable computational problems."

The researchers made their different kind of computer with DDQ, a hexagonal molecule made of nitrogen, oxygen, chlorine and carbon that self-assembles in two layers on a gold substrate. The DDQ molecule can switch among four conducting states -- 0, 1, 2 and 3 -- unlike the binary switches -- 0 and 1 -- used by digital computers. [...]

Their tiny processor can solve problems for which algorithms on computers are unknown, especially interacting many-body problems, such as predictions of natural calamities and outbreaks of disease. To illustrate this feature, they mimicked two natural phenomena in the molecular layer: heat diffusion and the evolution of cancer cells.

So, it sounds like it's a 2-bit cellular automaton where each cell is composed of only 8 molecules (wow). But they said that each cell can route to ~300 others, so it must not be laid out in a hexagonal grid. My guess is that the way they are getting it to do anything is by training a neural net until it evolves custom solutions to whatever problems they have success-conditions for. But has anyone ever gotten an artificial neural network to evolve to something actually useful in the real world, or is it all still AI boondoggle?
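
If you want to poke at the idea, a 2-bit CA is easy to fake in software. Here's a toy Python sketch; the grid shape, neighborhood, and update rule are all made up for illustration and have nothing to do with the actual DDQ rules:

    import numpy as np

    # Toy 4-state cellular automaton on a square grid. The update rule
    # (sum of the four orthogonal neighbors, mod 4) is invented for
    # illustration; it is NOT the molecular rule from the paper.
    def step(grid):
        total = (np.roll(grid, 1, axis=0) + np.roll(grid, -1, axis=0) +
                 np.roll(grid, 1, axis=1) + np.roll(grid, -1, axis=1))
        return total % 4                  # each cell lands in state 0, 1, 2, or 3

    rng = np.random.default_rng(0)
    grid = rng.integers(0, 4, size=(16, 16))   # random initial 2-bit states
    for _ in range(10):
        grid = step(grid)                 # patterns propagate step by step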


15 Responses:

  1. ultranurd says:

    I'm trying to wrap my head around this by skimming the full text, and it sounds like the 300-molecule count refers to the scale of the entire bilayer grid in the ~20-25nm window. Each molecule still connects to 2-6 neighbors (which I guess would be hexagonal if fully connected?) depending on its state and the states of its neighbors, giving a total of 8 classes of circuit. They count two molecules as connected if the tunneling current between them is 60% of the peak detected in the entire grid.
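
    That connectivity criterion is just a threshold over the measured current map, something like this sketch (the current matrix here is random stand-in data; only the 60%-of-peak rule comes from the paper):

        import numpy as np

        # Stand-in data: currents[i, j] = tunneling current between
        # molecules i and j. In the experiment this comes from the STM
        # scan; here it's random, just to show the thresholding step.
        rng = np.random.default_rng(1)
        currents = rng.random((300, 300))
        currents = (currents + currents.T) / 2   # link i<->j is symmetric
        np.fill_diagonal(currents, 0.0)          # no self-links

        threshold = 0.6 * currents.max()         # 60% of the peak current
        adjacency = currents >= threshold        # "connected" molecules
        degrees = adjacency.sum(axis=1)          # neighbors per molecule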

    On top of that, some of the CA rules appear to be able to operate on larger groups of molecules, at ranges around 15 molecules, setting up some kind of directional bias on the grid, but I get pretty lost around here, mostly because I'm unclear on the relationship between circuit types and rules. I think their claim of hundreds of simultaneous executions comes from seeing the rules applied across their entire window by charges moving around?

    In general, it sounds like the computation is achieved by letting the states of the grid converge after setting up an input pattern of charges, and they've manually decided what a given distribution of states on the input or output means. For the cancer example, they have to do multiple STM passes of "writes" to represent the differential equations they're trying to calculate.
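
    So in software terms the whole computation is a fixed-point iteration. A generic sketch (the real device settles by physics, not an explicit loop; step is any update rule, like the toy one above):

        # Set inputs, let the grid settle, read the output. Expects
        # numpy arrays so the equality check works elementwise.
        def run_to_convergence(grid, step, max_iters=1000):
            for i in range(max_iters):
                new_grid = step(grid)
                if (new_grid == grid).all():
                    return new_grid, i           # reached a fixed point
                grid = new_grid
            return grid, max_iters               # never settled; give up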

    The biggest block to practical application is the runtime I/O: how hard it is to do an STM pass, get the image out, and interpret it as a grid of states, plus setting up the bilayer in the microscope in the first place.

  2. strathmeyer says:

    "But has anyone ever gotten an artificial neural network to evolve to something actually useful in the real world, or is it all still AI boondoggle?"

    Useful like... driving?

    • carussell says:

      Driving is small potatoes. Useful like "solv[ing] problems for which algorithms on computers are unknown".

      • pavel_lishin says:

        I'd hardly call something my neighbors are barely capable of "small potatoes".

      • jayp39 says:

        One of my professors made a neural net for predicting the stock market with startling accuracy (based on the graphs he showed us). You have to train it with a lot of data and you have to keep retraining it or it becomes inaccurate quickly, but it worked.

        The company he made it for (unnamed, but supposedly investing billions of dollars every day) was uncomfortable with it because a neural net can't answer the question "why?" when it says buy or sell, so they had him build an expert system instead, which was less accurate but could actually show the rule chaining it used to make a decision. As to why he didn't use it for personal gain... well, all I can say is he was one weird dude who obviously didn't care much about money.
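
        The constant-retraining part is basically walk-forward training. A minimal sketch of the pattern; the lag count, window size, features, and network shape here are my own placeholders, not his setup:

            import numpy as np
            from sklearn.neural_network import MLPRegressor

            # Walk-forward retraining: fit on a sliding window of lagged
            # returns (a 1-D numpy array), predict one step ahead, then
            # slide the window and refit so the model never goes stale.
            def walk_forward(returns, lags=5, window=250):
                preds = []
                for t in range(window + lags, len(returns)):
                    X = np.array([returns[i - lags:i]
                                  for i in range(t - window, t)])
                    y = returns[t - window:t]
                    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=300)
                    model.fit(X, y)              # train on recent data only
                    preds.append(model.predict(
                        returns[t - lags:t].reshape(1, -1))[0])
                return preds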

        • owyn says:

          I don't mind money! I've got an in-my-head-only idea for unleashing a neural net on forex. Just add bank account... :)

          As far as the original article, when I originally got into college, I had just read Greg Bear's "Blood Music" and this is the kind of stuff I wanted to do. I planned on being a Molecular Biology/CS double major or some such hybrid. That was way too hard so I switched to CS/Cog Sci pretty quick, which was more about the fun brain stuff. CS turned out to be pretty hard too so I just ended up getting a Cog Sci degree and now I'm a web developer. So in a way, avoiding work that is too hard pretty much sums up my entire career. Ha.

        • mattbot says:

          Sounds like you're talking about Ben Goertzel?

          • jayp39 says:

            Nope, Professor Mark B. Fishman although I don't think he was working alone. Sadly, since going on sabbatical he's been AWOL and nobody knows how to get ahold of him. And his name is fairly common so I haven't been able to track him down in the internets either.

        • cattycritic says:

          If only I had a dollar for every time I heard someone say someone they knew developed a way to predict the stock market. For all the people claiming they've developed such ways, you'd think --one-- of them would have been greedy enough to actually capitalize on it, and would have been so wildly successful as to be famous. I am extremely skeptical of your professor's claims.

        • cdavies says:

          Putting the two things together, we have an interesting question. Would you trust an algorithm that couldn't be proved formally correct, like one evolved in a squishy organic brain, to drive you anywhere?

          Humans suck at driving in the general case. They suck at it and, because they suck, hundreds of thousands of people a year die in road traffic accidents. It seems to me that the only reason we let people drive at all is because there's no better answer to the question of how to get from a to b, where b is more than 6 kilometres from a.

          Presumably, an organic machine would be better at it than humans, due to having sensory organs designed specifically for the task. But no matter how well it performed in general use, there'd always be the nagging doubt that some set of inputs would make it decide the best course of action would be to Evel Knievel over the median and plough headlong into the massive semi coming in the other direction.

          Just how many people would the software be allowed to kill a year? And is that a judgement any software engineer wants to step forward and take?

          • jwz says:

            People who believe that algorithms that have allegedly been proven formally correct that are running on computers made of atoms will, in fact, do what they have been "proven" to do don't understand math, or software development, or physics, or all three.

  3. notthebuddha says:

    Like jayp39 said, NNs can treat market values as functions of various factors and predict changes, since they are universal approximators (given sufficient resources). This can be useful for eliminating proprietary algorithms in applications where some loss or error is permissible, or where lots of computing resources are available to train the network to match an oracle perfectly; that's a gain because the production network is fairly lightweight and might even be more efficient than the original unreviewed, obfuscated, and proprietary code.
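
    In miniature, the oracle-matching idea looks like this (the oracle function, sampling range, and network size are placeholders):

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Treat the proprietary code as a black-box oracle: sample it,
        # then fit a small network to reproduce it. The "oracle" here is
        # a stand-in for the unreviewed original.
        def oracle(x):
            return np.sin(3 * x) + 0.5 * x

        rng = np.random.default_rng(2)
        X = rng.uniform(-2, 2, size=(5000, 1))
        y = oracle(X).ravel()

        surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000)
        surrogate.fit(X, y)                  # the lightweight replacement

        # Check the match on fresh inputs before trusting it anywhere.
        X_test = np.linspace(-2, 2, 200).reshape(-1, 1)
        max_err = np.max(np.abs(surrogate.predict(X_test)
                                - oracle(X_test).ravel()))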

    You need good data though. I've been researching edge-detection applications, and the degree of error is less important than making sure your reference data covers a variety of situations; I got much better results when I used documentary photos with harsh variations in lighting, depth, and material textures rather than "pretty" posed and lit portraits.

  4. lovingboth says:

    Depends on your definition of 'useful', but the best backgammon players in the world are neural net programs.