“ROS (Robot Operating System) Everywhere!” says Willow Garage’s Steve Cousins
by Frank Tobe, Owner/Publisher of The Robot Report
Willow Garage’s ROS (Robot Operating System) provides a collection of software libraries and tools to help software developers create robot applications. ROS provides device drivers, visualizers, message passing, package management, and advanced libraries that help application engineers work with camera, video and 3D data.
ROS is open source and free to use, change and commercialize. The system is used by a growing number of popular personal service and research robots, including one at the University of California, Berkeley (shown in the picture above) that learned how to process a basket of laundry from washing to folding. Willow Garage’s Brian Gerkey believes ROS will allow entrepreneurs to create new commercial applications for robots even if they don’t have extensive robotics expertise. Gerkey said in the write-up about his Technology Review TR35 award, “The goal [of ROS] is to help people who have ideas for what robots can do in the marketplace by providing a common language for robots.”
The industrial robotics industry today confronts changing production processes driven by the trend toward individualized consumer products. This requires robots that are easier to handle, more flexible, faster, and more accurate.
Thus it was a big leap forward last week when Yaskawa America’s Motoman Robotics Division signed a collaboration agreement with the Southwest Research Institute (SwRI) to port Willow Garage’s ROS to the Motoman line of industrial robots. This is the first authorized porting of ROS to an industrial robot. SwRI plans to develop, demonstrate and release to the open-source community an interface between Motoman robots and ROS thereby taking this award-winning software beyond the realm of universities and research and into the world of business.
SwRI (Southwest Research Institute) is an independent, nonprofit, applied research and development organization based in San Antonio, Texas, with more than 3,000 employees and an annual research volume of more than $500 million.
Yaskawa America’s Motoman Robotics Division offers arc welding robots, spot welding robots, painting robots, handling robots and others. Its most recent two-armed handling robots are being implemented in the automotive industry in Germany and Japan.
W. Clay Flannigan, Manager of the Robotics and Automation Engineering Section of SwRI, said:
We are working to build a general purpose interface between the broad manipulation and perception capabilities of the ROS framework and the highly reliable architecture of industrial robots. We plan to implement the interface at a low-level within the existing robot controller that enables the capabilities of the ROS manipulation stack, while maintaining the safety inherent in the industrial controller. By providing the solution as open source, we hope to build a community around the use of ROS in a wide variety of industrial applications. Ideally, the community will expand to encompass more robots, sensors and industrial controllers, and we hope to contribute to the process.
We plan to release the source in the first quarter of 2012.
Erik Nieves, Technology Director for Yaskawa America’s Motoman Robotics Division, explained why Yaskawa America is pursuing an open source controller interface for its Motoman line of robots:
Yaskawa’s strategy is to offer many controllers for the many different audiences and applications that our robots address. This ROS adaptation is in line with that strategy.
The next step for industrial robotics is to be more sensor aware; to be able to accommodate the many new capabilities showing up in the service sector. It’s clear that ROS is able to handle all of these, and it saves our programming department from writing drivers for each and every possible configuration. We want ROS for these next generation devices, which will come first to ROS.
A near-term goal of the project is to demonstrate advanced material handling that leverages the path planning, grasp planning, and perception frameworks within ROS to enable robotic solutions that would be difficult or expensive with current approaches. One can only imagine the longer-term future. Perhaps ROS could become the universal robot controller that most end users wish for. Perhaps the clunky teaching pendant will soon be replaced by an iPad or tablet running a ROS applet.
Willow Garage’s CEO Steve Cousins, in addition to providing the colorful title of this article (“ROS Everywhere!”), commented on the significance of the SwRI Motoman ROS porting project. He explained that current-day industrial robots often don’t need the extensive vision, mobility and navigation capabilities available in the growing world of service robotics. But vision and navigation systems are the next step in the evolution of industrial robotics as it branches out of the automotive industry into other areas of production and material handling, and ROS is a good entry system for programming, simulating and implementing these new industrial and material-handling applications using all the new navigation and vision features.
By using ROS, Yaskawa’s Motoman line of robots can add features to its existing manipulators at little software research cost, enabling them to compete in handling new manufacturing processes. Willow Garage, in turn, gets a boost to ROS’s credibility from this very real and timely proof of concept. Also gaining are the many industrial integrators who bring a wide range of industrial expertise to the ROS community, making both groups stronger.
Article courtesy of The Robot Report
DNA Computing: What Will the Programming Language Be?
By Peter Marino
Interviewing Dennis Shasha about DNA programming got me thinking: what will the DNA programming language of the future be? The idea of using the Watson-Crick building blocks to assemble a functional program is not new. It is, after all, the way organic life forms function. The distinction between the inorganic computers we use today and the organic computers of the future is simply the medium used to calculate or compare two values.
Peeled down to its bare essence, all a digital computer does is one of two things: either add two values together or compare two values and determine if they are the same.
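The claim that everything reduces to adding and comparing can be sketched in a few lines of Python. This is an illustrative toy, not how any particular CPU is wired: subtraction becomes addition of a two's complement, and ordering falls out of the sign bit of a difference.

```python
# Toy 8-bit machine: richer operations built only from addition and equality.
BITS = 8
MASK = (1 << BITS) - 1               # 0xFF: keep results in 8 bits

def add(a, b):
    return (a + b) & MASK            # modular addition, like real hardware

def sub(a, b):
    return add(a, add(b ^ MASK, 1))  # a + (~b + 1): two's-complement negation

def equal(a, b):
    return sub(a, b) == 0            # compare by subtracting and testing zero

def less_than(a, b):
    # Sign bit of the difference (valid here for small unsigned operands).
    return sub(a, b) >> (BITS - 1) == 1

print(sub(5, 3))        # 2
print(equal(7, 7))      # True
print(less_than(3, 5))  # True
```

The same reduction is why the article's "add or compare" framing holds for any instruction set: multiplication, branching and the rest are compositions of these two primitives.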
Inorganic computers are by definition rigid structures. Various metals and semiconductor materials are bonded to a non-conductive silica substrate to form the component parts of the arithmetic, logic and memory along with the necessary supporting circuitry. Organic computers appear to be self-supporting structures, in that the framework is integrated with the computational units, analogous to comparing a static office building to a kinetic sculpture.
Initially, any nucleotide computer will need to be “manually assembled,” with the first functional parts becoming the tools to build ever more sophisticated organic devices. The mechanics of attaching Watson-Crick pairings into various forms to replicate the processes of AND gates, OR gates and NOT inverters for logic circuits and calculation engines are still speculative. Each model, however, has a methodology for sculpting strands by coupling and decoupling assemblers. A compiler will be required to check the DNA coding to ensure problematic combinations of the Watson-Crick pairs have not been created inadvertently. Because of the limited number of pairings, it seems unlikely that any interpreted version of DNA coding will emerge. However, it’s only a matter of time before true artificial intelligence is capable of passing the Turing test.
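Since the chemistry above is still speculative, here is a purely logical sketch of what those DNA gates and the compiler check would have to do. All function names are invented for illustration; the booleans stand in for the presence or absence of input strands, and the "problematic combination" rule (long single-base runs) is an arbitrary stand-in for real sequence-design constraints.

```python
# Logical behavior a strand-based gate would need to reproduce: an output
# strand is released only when the required input strands are present.
def and_gate(a: bool, b: bool) -> bool:
    return a and b                  # both input strands must bind

def or_gate(a: bool, b: bool) -> bool:
    return a or b                   # either input strand alone suffices

def not_gate(a: bool) -> bool:
    return not a                    # inverter: input strand sequesters output

# Compiler-style check from the text: reject strands containing a
# "problematic" subsequence (here, arbitrarily, a run of >4 identical bases).
def check_strand(seq: str, max_run: int = 4) -> bool:
    run, prev = 0, ""
    for base in seq:
        assert base in "ACGT", f"invalid base {base!r}"
        run = run + 1 if base == prev else 1
        prev = base
        if run > max_run:
            return False
    return True

print(and_gate(True, False))          # False
print(check_strand("ACGTACGTAAAAA"))  # False: run of five A's
```

A real compiler for DNA programs would screen for unintended secondary structure and cross-hybridization rather than base runs, but the shape of the check, scanning candidate sequences against forbidden patterns, is the same.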
The interface between the coder and the DNA engine is all speculative at this point. Researchers have been focusing their attention on the methodologies of connectivity between the component parts of nucleotides to establish a solid foundation for supporting DNA sculptures. Much as the initial research for LASER technology was focused on creating a beam of coherent light, the context of the frame was built before any attempts were made to modulate the beam to carry digital traffic. The same is true of the current state of the art in DNA programming.
A more apropos analogy can be made regarding the development of computer languages. The microcode or basic instruction set had to be established and implemented before object code could be assembled to form a program capable of performing a task. As is the nature of computer languages, higher-level languages were then developed to enable coders to perform tasks of ever-increasing complexity. DNA and nucleic programming will follow a similar development cycle.
The future of DNA coding is bright. We already know the concept works, because organic life is a massive series of DNA programs running concurrently and independently of each other at the nanoscopic level, yet capable of complex interactions at macroscopic levels. The proof of concept has already been amply demonstrated by the hundreds of thousands of living species. The question isn’t “Can DNA be used as a computer language?” It is rather “How can we write DNA code that is capable of achieving productive results?”
Today’s DNA researchers are already at work, with several proposals for handling the mechanical aspects of assembling nucleotides. Only time and concentrated effort stand between us and working DNA programs. Stay tuned for quantum leaps in the realm of DNA programming and DNA computers, as it will probably be the most natural medium for integrating technology into humans.
Peter Marino has been a lifetime science enthusiast with careers in fitness, web design and as a freelance writer for technology, web design and marketing magazines. He is currently the Senior Partner and CMO of reelWebDesign.com. He can be reached on Facebook at: Facebook.com/PeterMarinoShares and on his SwarmKnowledge Facebook fanpage or on Twitter: @reelWebDesign.
Interview with Arthur Nishimoto, Creator of Fleet Commander
by Peter Marino
Arthur Nishimoto is a graduate student at the University of Illinois at Chicago’s Electronic Visualization Laboratory. Arthur is no ordinary student, however, because he has developed a wall-sized touch-screen strategy game called Fleet Commander, based on the Star Wars story. In the game, multiple players are separated into two teams that take control of X-wings, TIE fighters and Death Stars, all with a touch of their fingers. Arthur has been in the news a lot lately, so I thought I’d better interview him now before he’s too important for me. This is a visionary student who will seize technology to make a more interesting future, so keep your eyes on him.
Peter: How did you come up with the idea for Fleet Commander and why did you want it to be on a huge touch screen?
Arthur: The basic idea for Fleet Commander was one of several game ideas I had come up with while taking Dr. Jason Leigh’s video game design course during spring 2009. For that class, we were in groups of four and had ten weeks to develop a game from scratch for a multi-touch table built by the Electronic Visualization Lab (EVL) called TacTile. We ultimately decided to make a foosball game for that class. After the class ended, I was hired as an undergraduate researcher at EVL by Dr. Leigh to continue exploring multi-touch interfaces.
I wanted to see what kinds of gestures and interfaces could be better suited for gaming on a multi-touch table. What began as a simple widget you could drag and rotate around eventually grew into Fleet Commander. When EVL had set up a touch overlay to go over the existing 20-foot LCD wall, I figured, why not port Fleet Commander from TacTile’s 1920×1080 resolution to the wall’s 8160×2304?
Peter: How long did it take you to program a game like this?
Arthur: The bulk of the programming took about six months, although I’ve been working on and off since summer of 2009.
Peter: What programming language did you use to make it?
Arthur: Fleet Commander was created using Processing (http://processing.org/), an open source programming language based on Java.
Peter: How did you get your school to invest in the huge touch screen, or did they already have it?
Arthur: EVL has a history of building large tiled wall displays and exploring interaction techniques in a large display environment. Since my primary research work revolves around multi-touch, when the lab decided to buy a 20-foot wide multi-touch overlay from PQLabs, I was heavily involved with testing and debugging the overlay since this was the largest touch screen PQLabs had ever produced.
Peter: Besides school, was there someplace else you learned how to program?
Arthur: While I knew some programming in BASIC prior to attending college, most of what I learned was at UIC.
Peter: Where do you see the future of gaming in 5 years and beyond?
Arthur: I think it’s amazing to look at how gaming has become more interactive these past years, going from buttons and analog sticks to the Wii, Move, and Kinect. With Sony adding 3D, Microsoft adding hands-free tracking and speech recognition, and the Wii U extending the gaming experience beyond the TV screen, I think gaming will continue to move toward a virtual reality-like environment.
Peter: Do you ever envision a day where we’ll have technology similar to the holodeck on Star Trek?
Arthur: Technologies like the CAVE, lifelike avatars, 3D movies, and gaming are continually moving us toward something that one day could be as immersive and interactive as a holodeck.
YouTube video of Fleet Commander: tabletop version and wall version.
Peter: I’m glad we have people like you to make it happen!
Interview conducted by Peter Marino, the Chief Science Officer and founder of SwarmKnowledge.com. He’s also a web designer, online marketer and freelance writer on many topics. You can follow Peter at www.facebook.com/PeterMarinoShares and on Twitter @reelWebDesign. Be sure to become a fan of SwarmKnowledge at Facebook.com/swarmknowledge.
An interview with Dennis Shasha about his latest book ‘Storing Clocked Programs Inside DNA.’
Peter: So, Dennis, you’ve written this book called “Storing Clocked Programs Inside DNA: A Simplifying Framework for Nanocomputing” with Jessie Chang. What do all these words mean?
Dennis: Well, DNA computing has been around since my colleague Ned Seeman realized in the early 1980s that he could build stable, non-linear DNA structures. Seeman and others have built remarkably complex geometrical structures — millions of them at nanoscale — since then.
They have also built nanoscale robots that can push particles around, assemble circuits and the like.
Peter: Len Adleman also did some cool work.
Dennis: Right. He showed that a simple instance of the Hamiltonian path problem could be solved using DNA. After that, many people looked at the massive parallelism of DNA computing and thought it could outperform electronic computers. I disagreed from the beginning.
Peter: Why was that?
Dennis: Because the combinatorial explosion still bites you.
If you want to solve a 100 city Hamiltonian path problem, you have to consider a potential 100 factorial (100!) paths. That’s a number that’s larger than the number of particles in the known universe.
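Shasha's arithmetic is easy to verify: 100! has 158 digits, while common estimates put the number of particles in the observable universe around 10^80.

```python
import math

paths = math.factorial(100)   # 100! candidate paths for a 100-city tour
particles = 10 ** 80          # rough estimate of particles in the known universe

print(len(str(paths)))        # 158 digits: 100! is about 9.3e157
print(paths > particles)      # True: the search space dwarfs the universe
```

Even with a spoonful of DNA holding trillions of strands (on the order of 10^20 or so), brute-force enumeration runs out of molecules long before it runs out of paths, which is the heart of Shasha's objection.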
Peter: Ok, but you’re not a chemist. What do you bring to the party?
Dennis: Let’s say we want DNA to do some complex task like recognize one DNA strand out of many and light up some fluorescent markers.
If you have say two marker colors and 30 possible strands, then you will want to do this in phases.
Say red in the first phase, blue in the second, then blue, then blue, then red.
That is, you want a binary readout in time.
Peter: Yeah, that’s easy to do on a computer.
Dennis: Exactly, because computers have clocks.
DNA computing until now has not had clocks. Instead experimenters or robots poured appropriate strands into a DNA mix at just the right times.
We wanted to (i) store programs inside DNA and (ii) run them based on a clock.
Peter: So how could this work in practice?
Dennis: In practice, an application designer would build a DNA program that consists of a sequence of DNA strands (sequences of A, C, T, and G) perhaps with if conditions and while loops. Our method attaches this to a scaffold.
(A very clever rising sophomore, Aidan Daly, managed to make this work in two months last summer.) This program can be shipped around the world.
Peter: Who executes this program?
Dennis: The recipient loads the DNA into solution and then has a robot that pours in two strands that we call tick and tock.
Those strands peel instructions off of these scaffolds and the DNA computation then unfolds.
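The tick/tock mechanism Shasha describes can be sketched as a toy simulation. This is a hedged model of the control flow only, not the actual chemistry of Shasha and Chang's scaffolds; the class and instruction names are invented for illustration.

```python
# Toy model of a clocked DNA program: a scaffold holds an ordered list of
# instruction strands, and alternating tick/tock signals release them one per
# clock phase. Execution order comes from the clock, not from an
# experimenter's pour schedule.
class Scaffold:
    def __init__(self, instructions):
        self.instructions = list(instructions)
        self.expect = "tick"          # phases must alternate tick, tock, ...

    def clock(self, signal):
        """Apply a tick or tock strand; release the next instruction, if any."""
        if signal != self.expect:
            return None               # wrong-phase strand displaces nothing
        self.expect = "tock" if signal == "tick" else "tick"
        return self.instructions.pop(0) if self.instructions else None

# The red/blue marker example from earlier in the interview, as a program:
program = Scaffold(["bind-marker-red", "bind-marker-blue", "bind-marker-blue"])
trace = [program.clock(s) for s in ["tick", "tock", "tick", "tock"]]
print(trace)  # ['bind-marker-red', 'bind-marker-blue', 'bind-marker-blue', None]
```

The point of the model is that the recipient only ever supplies two generic strands, tick and tock; everything program-specific travels on the scaffold itself, which is what makes the program shippable.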
Peter: So, what new opportunities does this open up?
Dennis: Any computations that require phasing or branching or looping.
This could mean pathogen detection or nanoassembly.
Peter: What are the challenges?
Dennis: Our DNA scaffolds have few instructions but there are millions of scaffolds.
So we are talking about short fat parallelism. Compiling useful computations to that model and avoiding barrier synchronizations are just some of those challenges. So, on the computer science end, compilation; on the synthetic biology end, fast assembly of programs.
Dennis Shasha’s book can be found on Amazon at: http://amzn.to/mddjb9
Interview conducted by Peter Marino, the Senior Partner and founder of reelWebDesign.com. He’s also a science and technology aficionado and freelance writer on many topics. You can follow Peter at www.facebook.com/PeterMarinoShares and on Twitter @reelWebDesign.