NASA: LIQUID WATER FOUND ON MARS

This week NASA announced that it has discovered a source of free-flowing water on the surface of the planet Mars, a breakthrough finding that could forever change how human beings view our celestial neighbor. Famed physicist, futurist, and CBS News science contributor Dr. Michio Kaku told ‘CBS This Morning’ that, with this discovery, NASA may have “hit the jackpot.”

Public fascination with Mars has increased ever since NASA launched its unmanned rover Curiosity, which continually sends back images of the Martian landscape. Mars was once thought too hostile to support life, but the discovery of liquid water alters our understanding not only of the red planet’s origins but also of the potential for an eventual human presence there. “It changes everything,” said Kaku. “It means that this liquid water can be used for, perhaps, irrigation, drinking water, and even rocket fuel.”

New technique allows fast printing of microscopic electronics

A new technique for printing extraordinarily thin lines quickly over wide areas could lead to larger, less expensive and more versatile electronic displays, as well as new medical devices, sensors and other technologies.

KAKU ON TECHNOLOGY FOR CHANGE

Technology for Change Asia
February 27th-28th, 2024
Grand Hyatt Hong Kong

This month, Dr. Michio Kaku, the globally renowned theoretical physicist, futurist, and bestselling author of QUANTUM SUPREMACY: How The Quantum Computer Revolution Will Change Everything, will keynote a major conference in Hong Kong, an international financial, economic, and technology hub.

Asia-Pacific has unique challenges but new technologies put resilient economic growth and a more sustainable future within reach. At the 4th annual Technology for Change Asia, attendees will explore how pioneers at the cusp of the technology curve are addressing digital and financial inequality, access to food and education, mitigating climate change, and shaping the future of AI.

Dr. Kaku will speak about the controversies surrounding AI and quantum computers. The event is presented by Economist Impact, which empowers businesses, governments and foundations to catalyse change and enable progress.

Excerpt from ‘THE FUTURE OF THE MIND’

Houdini believed that telepathy was impossible. But science is proving Houdini wrong. Telepathy is now the subject of intense research at universities around the world, where scientists have already been able to use advanced sensors to read individual words, images, and thoughts in a person’s brain. This could alter the way we communicate with stroke and accident victims who are “locked in” their bodies, unable to articulate their thoughts except through blinks. But that’s just the start. Telepathy might also radically change the way we interact with computers and the outside world.

Indeed, in a recent “Next 5 in 5 Forecast,” which predicts five revolutionary developments in the next five years, IBM scientists claimed that we will be able to mentally communicate with computers, perhaps replacing the mouse and voice commands. This means using the power of the mind to call people on the phone, pay credit card bills, drive cars, make appointments, create beautiful symphonies and works of art, and more. The possibilities are endless, and it seems that everyone, from computer giants, educators, video game companies, and music studios to the Pentagon, is converging on this technology.

True telepathy, found in science fiction and fantasy novels, is not possible without outside assistance. As we know, the brain is electrical. In general, anytime an electron is accelerated, it gives off electromagnetic radiation. The same holds true for electrons oscillating inside the brain, which broadcast radio waves. But these signals are too faint to be detected by others, and even if we could perceive these radio waves, it would be difficult to make sense of them. Evolution has not given us the ability to decipher this collection of random radio signals, but computers can. Scientists have been able to get crude approximations of a person’s thoughts using EEG scans. Subjects would put on a helmet with EEG sensors and concentrate on certain pictures, say, the image of a car. The EEG signals were then recorded for each image, and eventually a rudimentary dictionary of thought was created, with a one-to-one correspondence between a person’s thoughts and the EEG image. Then, when a person was shown a picture of another car, the computer would recognize the EEG pattern as being from a car.
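
The "dictionary of thought" idea reduces to a simple pattern-matching loop. Here is a minimal sketch of it, with entirely synthetic data; the electrode count, noise levels, and two-image vocabulary are illustrative assumptions, not details from the actual experiments.

```python
import numpy as np

# Toy "dictionary of thought": average EEG feature vectors recorded while
# a subject views labeled images, then classify a new recording by finding
# the closest stored pattern. All data here is synthetic.

rng = np.random.default_rng(0)
n_channels = 64          # hypothetical electrode count
templates = {}           # label -> average EEG pattern

def train(label, trials):
    """Store the mean pattern over repeated viewings of one image."""
    templates[label] = np.mean(trials, axis=0)

def classify(trial):
    """Return the label whose stored pattern best correlates with the trial."""
    scores = {label: np.corrcoef(trial, pattern)[0, 1]
              for label, pattern in templates.items()}
    return max(scores, key=scores.get)

# Build the dictionary from noisy repetitions of two underlying patterns.
car_pattern, house_pattern = rng.normal(size=(2, n_channels))
train("car",   car_pattern   + rng.normal(0, 0.3, (20, n_channels)))
train("house", house_pattern + rng.normal(0, 0.3, (20, n_channels)))

# A new "viewing" of a car should map back to the car entry.
print(classify(car_pattern + rng.normal(0, 0.3, n_channels)))  # -> car
```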

The advantage of EEG sensors is that they are noninvasive and quick. You simply put a helmet containing many electrodes onto the scalp, and the EEG can rapidly identify signals that change every millisecond. But the problem with EEG sensors, as we have seen, is that electromagnetic waves deteriorate as they pass through the skull, and it is difficult to locate their precise source. This method can tell if you are thinking of a car or a house, but it cannot re-create an image of the car.

That is where Dr. Jack Gallant’s work comes in…

VIDEOS OF THE MIND

The epicenter for much of this research is the University of California at Berkeley, where I received my own Ph.D. in theoretical physics years ago. I had the pleasure of touring the laboratory of Dr. Gallant, whose group has accomplished a feat once considered to be impossible: videotaping people’s thoughts. “This is a major leap forward in reconstructing internal imagery. We are opening a window into the movies in our mind,” says Gallant.

When I visited his laboratory, the first thing I noticed was the team of young, eager postdoctoral and graduate students huddled in front of their computer screens, looking intently at video images that were reconstructed from someone’s brain scan. Talking to Gallant’s team, you feel as though you are witnessing scientific history in the making.

Gallant explained to me that first the subject lies flat on a stretcher, which is slowly inserted headfirst into a huge, state-of-the-art MRI machine costing upward of $3 million. The subject is then shown several movie clips (such as movie trailers readily available on YouTube). To accumulate enough data, the subject has to sit motionless for hours watching these clips, a truly arduous task. I asked one of the postdocs, Dr. Shinji Nishimoto, how they found volunteers who were willing to lie still for hours on end with only fragments of video footage to occupy the time. He said the people in the room, the grad students and postdocs, volunteered to be guinea pigs for their own research.

As the subject watches the movies, the MRI machine creates a 3-D image of the blood flow within the brain. The MRI image looks like a vast collection of thirty thousand dots, or voxels. Each voxel represents a pinpoint of neural energy, and the color of the dot corresponds to the intensity of the signal and blood flow. Red dots represent points of large neural activity, while blue dots represent points of less activity. (The final image looks very much like thousands of Christmas lights in the shape of the brain. Immediately you can see that the brain is concentrating most of its mental energy in the visual cortex, which is located at the back of the brain, while watching these videos.)

Gallant’s MRI machine is so powerful it can identify two to three hundred distinct regions of the brain and, on average, can take snapshots that have one hundred dots per region of the brain. (One goal for future generations of MRI technology is to provide an even sharper resolution by increasing the number of dots per region of the brain.)

At first, this 3-D collection of colored dots looks like gibberish. But after years of research, Dr. Gallant and his colleagues have developed a mathematical formula that begins to find relationships between certain features of a picture (edges, textures, intensity, etc.) and the MRI voxels. For example, if you look at a boundary, you’ll notice it’s a region separating lighter and darker areas, and hence the edge generates a certain pattern of voxels. As subject after subject views this large library of movie clips, the mathematical formula is refined, allowing the computer to analyze how all sorts of images are converted into MRI voxels. Eventually the scientists were able to ascertain a direct correlation between certain MRI patterns of voxels and features within each picture.

At this point, the subject is shown another movie trailer. The computer analyzes the voxels generated during this viewing and re-creates a rough approximation of the original image. (The computer selects the images from one hundred movie clips that most closely resemble the one that the subject just saw and then merges them to create a close approximation.) In this way, the computer is able to create a fuzzy video of the visual imagery going through your mind. Dr. Gallant’s mathematical formula is so versatile that it can take a collection of MRI voxels and convert it into a picture, or it can do the reverse, taking a picture and then converting it to MRI voxels.
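
The two-step structure described here (fit a forward model from image features to voxels, then decode by ranking a clip library) can be sketched in a few lines. This is a toy reconstruction under stated assumptions: the feature and voxel dimensions, the plain least-squares fit (the real work used regularized regression), and all data are synthetic stand-ins, not Gallant's actual pipeline.

```python
import numpy as np

# (1) Fit a linear "encoding model" from clip features to voxel responses.
# (2) Given a new voxel pattern, rank a library of clips by how well their
#     predicted responses match, and blend the top matches.

rng = np.random.default_rng(1)
n_features, n_voxels, n_train, n_library = 50, 300, 400, 100

W_true = rng.normal(size=(n_features, n_voxels))        # stand-in "brain"
X_train = rng.normal(size=(n_train, n_features))        # clip features
Y_train = X_train @ W_true + rng.normal(0, 0.5, (n_train, n_voxels))

# (1) Encoding model by least squares.
W_fit, *_ = np.linalg.lstsq(X_train, Y_train, rcond=None)

# (2) Decode: predict voxels for every library clip, rank by correlation
#     with the observed pattern, and average the best few clips.
library = rng.normal(size=(n_library, n_features))
observed = library[7] @ W_true + rng.normal(0, 0.5, n_voxels)

predicted = library @ W_fit
scores = [np.corrcoef(observed, p)[0, 1] for p in predicted]
top = np.argsort(scores)[-5:]                # indices of best-matching clips
reconstruction = library[top].mean(axis=0)   # crude "fuzzy video" frame
print("true clip 7 among top matches:", 7 in top)
```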

I had a chance to view the video created by Dr. Gallant’s group, and it was very impressive. Watching it was like viewing a movie with faces, animals, street scenes, and buildings through dark glasses. Although you could not see the details within each face or animal, you could clearly identify the kind of object you were seeing.

Not only can this program decode what you are looking at, it can also decode imaginary images circulating in your head. Let’s say you are asked to think of the Mona Lisa. We know from MRI scans that even though you’re not viewing the painting with your eyes, the visual cortex of your brain will light up. Dr. Gallant’s program then scans your brain while you are thinking of the Mona Lisa and flips through its data files of pictures, trying to find the closest match. In one experiment I saw, the computer selected a picture of the actress Salma Hayek as the closest approximation to the Mona Lisa. Of course, the average person can easily recognize hundreds of faces, but the fact that the computer analyzed an image within a person’s brain and then picked out this picture from millions of random pictures at its disposal is still impressive.

The goal of this whole process is to create an accurate dictionary that allows you to rapidly match an object in the real world with the MRI pattern in your brain. In general, a detailed match is very difficult and will take years, but some categories are actually easy to read just by flipping through some photographs. Dr. Stanislas Dehaene of the Collège de France in Paris was examining MRI scans of the parietal lobe, where numbers are recognized, when one of his postdocs casually mentioned that just by quickly scanning the MRI pattern, he could tell what number the subject was looking at. In fact, certain numbers created distinctive patterns on the MRI scan. He notes, “If you take 200 voxels in this area, and look at which of them are active and which are inactive, you can construct a machine-learning device that decodes which number is being held in memory.”
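
Dehaene's remark describes an ordinary supervised classifier over a couple hundred voxels. A minimal sketch follows, again with synthetic voxel data; only the shape of the problem (roughly 200 voxels, ten digits) comes from the passage above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A "machine-learning device" over ~200 parietal voxels that predicts
# which number is held in memory. Voxel data is synthetic.

rng = np.random.default_rng(2)
n_voxels, trials_per_digit = 200, 40

X, y = [], []
for digit in range(10):
    signature = rng.normal(size=n_voxels)          # fake per-digit pattern
    X.append(signature + rng.normal(0, 1.0, (trials_per_digit, n_voxels)))
    y += [digit] * trials_per_digit
X, y = np.vstack(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out decoding accuracy:", clf.score(X_te, y_te))
```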

This leaves open the question of when we might be able to have picture-quality videos of our thoughts. Unfortunately, information is lost when a person is visualizing an image. Brain scans corroborate this. When you compare the MRI scan of the brain as it is looking at a flower to an MRI scan as the brain is thinking about a flower, you immediately see that the second image has far fewer dots than the first.

So although this technology will vastly improve in the coming years, it will never be perfect. (I once read a short story in which a man meets a genie who offers to create anything that the person can imagine. The man immediately asks for a luxury car, a jet plane, and a million dollars. At first, the man is ecstatic. But when he looks at these items in detail, he sees that the car and the plane have no engines, and the image on the cash is all blurred. Everything is useless. This is because our memories are only approximations of the real thing.) But given the rapidity with which scientists are beginning to decode the MRI patterns in the brain, will we soon be able to actually read words and thoughts circulating in the mind?

READING THE MIND

In fact, in a building next to Gallant’s laboratory, Dr. Brian Pasley and his colleagues are literally reading thoughts, at least in principle. One of the postdocs there, Dr. Sara Szczepanski, explained to me how they are able to identify words inside the mind.

The scientists used what is called ECOG (electrocorticogram) technology, which is a vast improvement over the jumble of signals that EEG scans produce. ECOG scans are unprecedented in accuracy and resolution, since signals are directly recorded from the brain and do not pass through the skull. The flipside is that one has to remove a portion of the skull to place a mesh, containing sixty-four electrodes in an eight-by-eight grid, directly on top of the exposed brain.

Luckily they were able to get permission to conduct experiments with ECOG scans on epileptic patients, who were suffering from debilitating seizures. The ECOG mesh was placed on the patients’ brains while open-brain surgery was being performed by doctors at the nearby University of California at San Francisco.

As the patients hear various words, signals from their brains pass through the electrodes and are then recorded. Eventually a dictionary is formed, matching the word with the signals emanating from the electrodes in the brain. Later, when a word is uttered, one can see the same electrical pattern. This correspondence also means that if one is thinking of a certain word, the computer can pick up the characteristic signals and identify it. With this technology, it might be possible to have a conversation that takes place entirely telepathically. Also, stroke victims who are totally paralyzed may be able to “talk” through a voice synthesizer that recognizes the brain patterns of individual words.

Not surprisingly, BMI (brain-machine interface) has become a hot field, with groups around the country making significant breakthroughs. Similar results were obtained by scientists at the University of Utah in 2011. They placed grids, each containing sixteen electrodes, over the facial motor cortex (which controls movements of the mouth, lips, tongue, and face) and Wernicke’s area, which processes information about language. The person was then asked to say ten common words, such as “yes” and “no,” “hot” and “cold,” “hungry” and “thirsty,” “hello” and “good-bye,” and “more” and “less.” Using a computer to record the brain signals when these words were uttered, the scientists were able to create a rough one-to-one correspondence between spoken words and computer signals from the brain.

Later, when the patient voiced certain words, they were able to correctly identify each one with an accuracy ranging from 76 percent to 90 percent. The next step is to use grids with 121 electrodes to get better resolution. In the future, this procedure may prove useful for individuals suffering from strokes or paralyzing illnesses such as Lou Gehrig’s disease, who would be able to speak using the brain-to-computer technique.

TYPING WITH THE MIND

At the Mayo Clinic in Minnesota, Dr. Jerry Shih has hooked up epileptic patients via ECOG sensors so they can learn how to type with the mind. The calibration of this device is simple. The patient is first shown a series of letters and is told to focus mentally on each symbol. A computer records the signals emanating from the brain as it scans each letter. As with the other experiments, once this one-to-one dictionary is created, it is then a simple matter for the person to merely think of the letter and for the letter to be typed on a screen, using only the power of the mind.
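
The calibrate-then-look-up loop described here has a very small skeleton. Below is a hedged sketch of that loop only; the 32-dimensional signals, noise level, and nearest-template rule are illustrative assumptions, not Dr. Shih's actual system.

```python
import numpy as np

# Mind-typing skeleton: record a brain-signal template per letter during
# calibration, then type by matching live signals against the templates.

rng = np.random.default_rng(3)
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
true_signal = {c: rng.normal(size=32) for c in alphabet}   # stand-in brain

# Calibration: one (noisy) template per letter.
templates = {c: true_signal[c] + rng.normal(0, 0.1, 32) for c in alphabet}

def type_letter(live):
    """Pick the calibrated letter whose template is nearest the live signal."""
    return min(templates, key=lambda c: np.linalg.norm(live - templates[c]))

word = "".join(type_letter(true_signal[c] + rng.normal(0, 0.1, 32))
               for c in "HELLO")
print(word)  # -> HELLO (with high probability at this noise level)
```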

Dr. Shih, the leader of this project, says that the accuracy of his machine is nearly 100 percent. Dr. Shih believes that he can next create a machine to record images, not just words, that patients conceive in their minds. This could have applications for artists and architects, but the big drawback of ECOG technology, as we have mentioned, is that it requires opening up patients’ brains.

Meanwhile, EEG typewriters, because they are noninvasive, are entering the marketplace. They are not as accurate or precise as ECOG typewriters, but they have the advantage that they can be sold over the counter. Guger Technologies, based in Austria, recently demonstrated an EEG typewriter at a trade show. According to their officials, it takes only ten minutes or so for people to learn how to use this machine, and they can then type at the rate of five to ten words per minute.

THE FUTURE OF THE MIND by Michio Kaku

Available in Paperback, Hardcover, Kindle, Audio CD, & Audible.

KAKU BOOK: IT MAKES A GREAT GIFT!

Dr. Kaku signs copies of his latest book for a crowd of fans at a recent book fair.

Dr. Kaku’s latest bestselling book, THE FUTURE OF THE MIND, continues to be a blockbuster hit. Every book signing event has attracted enormous crowds of Kaku fans, a unique and growing group of wonderful people spanning all ages and walks of life. A wonder for science and an enthusiasm for the future unify this amazing cross-section of the world that might not come together any other way. Dr. Kaku thanks you for showing up and making this launch such a resounding success.

Are YOU a Kaku fan? Help spread the word and expand the Kaku fan base, online and off. Help make THE FUTURE OF THE MIND Dr. Kaku’s most successful book ever. If you haven’t picked up your own copy, please BUY IT NOW. Better yet, please pick up some extra copies to give to your loved ones and work associates. Not only does it make a great gift for anyone and everyone; your generosity will make the world a better place.

The WSJ Weekend Interview with Michio Kaku — Captain Michio and the World of Tomorrow

The Wall Street Journal – The Weekend Interview. A version of this article appeared March 10, 2012, on page A11 in some U.S. editions of The Wall Street Journal, with the headline “Captain Michio and the World of Tomorrow: Humans are born with the curiosity of scientists but switch to investment banking.”

By Brian Bolduc, a former Robert L. Bartley fellow at the Journal and an editorial associate for National Review.

By 2020, the word “computer” will have vanished from the English language, physicist Michio Kaku predicts. Every 18 months, computer power doubles, he notes, so in eight years, a microchip will cost only a penny. Instead of one chip inside a desktop, we’ll have millions of chips in all our possessions: furniture, cars, appliances, clothes. Chips will become so ubiquitous that “we won’t say the word ‘computer,’” prophesies Mr. Kaku, a professor of theoretical physics at the City College of New York. “We’ll simply turn things on.”
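
As a back-of-the-envelope check on that figure, the compounding works out as follows (a sketch of the arithmetic implied by the interview, not a quoted calculation):

```latex
% Eight years at one doubling per 18 months:
\frac{8\ \text{years}}{1.5\ \text{years per doubling}} \approx 5.3\ \text{doublings},
\qquad 2^{5.3} \approx 40 .
```

So a fixed amount of computing power should cost roughly one-fortieth of its current price after eight years, which is the compounding behind the penny-chip prediction.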

Mr. Kaku, who is 65, enjoys making predictions. In his latest book, “Physics of the Future,” which Anchor released in paperback in February, he predicts driverless cars by 2020 and synthetic organs by 2030. If his forecasts sound strange, Mr. Kaku understands the skepticism. “If you could meet your grandkids as elderly citizens in the year 2100,” he offers, “you would view them as being, basically, Greek gods.” Nonetheless, he says, “that’s where we’re headed,” and he worries that the U.S. will fall behind in this technological onrush.

To comprehend the world we’re entering, consider another word that will disappear soon: “tumor.” “We will have DNA chips inside our toilet, which will sample some of our blood and urine and tell us if we have cancer maybe 10 years before a tumor forms,” Mr. Kaku says. When you need to see a doctor, you’ll talk to a wall in your home, and “an animated, artificially intelligent doctor will appear.” You’ll scan your body with a hand-held MRI machine, the “Robodoc” will analyze the results, and you’ll receive “a diagnosis that is 99 percent accurate.”

Continue reading the full article at The Wall Street Journal (The Weekend Interview), where you can join the discussion.

Original article (WSJ: The Weekend Interview) by Brian Bolduc. Original imagery by Ken Fallin.

Last Ten Blog Posts from Dr. Kaku’s Universe (Big Think Blog)

For your convenience, here is a list of the last ten blog entries from Dr. Kaku’s blog (Dr. Kaku’s Universe) hosted at Big Think. Be sure to leave your questions in the comment sections below each blog entry as Dr. Kaku will be periodically answering questions from fans. Stay tuned for more updates!

Follow the Methane! New NASA Strategy for Mars?

by Dr. Michio Kaku

The recent discovery of methane on Mars is more than a curiosity. It could be a game changer.

For the last three decades, NASA’s Mars exploration program has been based on a single mantra: follow the water. Where there is water, there might be life. So far, this strategy has come up empty-handed. But now, NASA might have to change course and follow the methane. Methane gas, which heats up our food in our kitchen stoves, can be created by natural processes, but about 90% of the earth’s methane comes from living things, such as the decomposition of organic material. So this is tantalizing evidence that perhaps some form of Martian life created this methane.

Back in 2003, the European Mars Express orbiter detected methane on Mars in the northern hemisphere. Careful analysis over several years with three ground-based telescopes then detected plumes of methane gas spewing from several specific sites on Mars, peaking in the summertime. Up to 20,000 metric tons of methane gas have been detected in these plumes. The burning question now being asked is: what is the origin of this methane gas?

Some 3 billion years ago, Mars was tropical, with lakes, rivers, perhaps even an ocean as big as the United States. Back then, you could get a suntan on prime beachfront property. And perhaps microbial life in the form of algae and plankton thrived in this lush environment. But today, Mars is a frozen desert, a bleak, sterilized landscape with a thin atmosphere of almost pure carbon dioxide. Perhaps this methane gas was left over from the decay of organic life billions of years ago. A more interesting hypothesis is that this methane comes from present-day microbial life that grows underground, perhaps heated by volcanic activity and hot springs.

(The earth also belches large quantities of methane gas, such as off the coast of California, because of methane deposits on the bottom of the ocean. Some have even speculated that these belches of gas might explain the mystery of the Bermuda Triangle. Colossal bubbles of gas seeping from the floor of the Caribbean may have suffocated sailors on ships or destabilized airplanes.)

At the very least, it means that NASA may rethink where to land the next series of Mars rovers. The next mission is the Mars Science Laboratory, to be launched in 2011. Before this announcement, NASA had considered (and passed over) the Nili Fossae area, where methane plumes have been found, as a landing site. Scientists may reconsider this decision in light of this discovery. By digging into the soil, or by carefully analyzing the hydrogen isotopes within the methane, scientists may settle the question of the origin of the methane gas.

If it turns out to be organic in nature, it could be the most profound achievement of the entire space program, rivaling sending a man to the moon.

In the long term, even if the methane turns out to be of inorganic origin, there are other possible uses for it. First, it might be used to create rocket fuel. In a manned mission to Mars, the astronauts may melt the ice in the ice caps or permafrost, separate out the oxygen and hydrogen from the water, and use them for rocket fuel. If methane exists in large quantities, it might be mixed with other volatile gases to make rocket fuel, thereby saving a considerable amount of money (since it may cost several hundred thousand dollars or more to put a pound of anything on the surface of Mars).

Second, in the far future, science fiction writers have speculated that it might be possible to create an artificial greenhouse effect on Mars by deliberately injecting methane into the atmosphere, since methane is a much more potent greenhouse gas than carbon dioxide. In this way, one might be able to heat up the planet enough to melt some of the ice caps and permafrost so that liquid water may one day freely flow on the surface of Mars. Some have speculated that we might be able to create a new Garden of Eden on Mars. The goal would be to terraform Mars so that humanity can become a “two-planet species,” i.e. to create an insurance policy in case life is threatened on earth.

Having a spare planet could come in handy one day.

M-Theory: The Mother of all SuperStrings

An introduction to M-Theory

Every decade or so, a stunning breakthrough in string theory sends shock waves racing through the theoretical physics community, generating a feverish outpouring of papers and activity. This time, the Internet lines are burning up as papers keep pouring into the Los Alamos National Laboratory’s computer bulletin board, the official clearinghouse for superstring papers. John Schwarz of Caltech, for example, has been speaking to conferences around the world proclaiming the “second superstring revolution.” Edward Witten of the Institute for Advanced Study in Princeton gave a spellbinding three-hour lecture describing it. The aftershocks of the breakthrough are even shaking other disciplines, like mathematics. The director of the Institute, mathematician Phillip Griffiths, says, “The excitement I sense in the people in the field and the spin-offs into my own field of mathematics … have really been quite extraordinary. I feel I’ve been very privileged to witness this first hand.”

Cumrun Vafa at Harvard has said, “I may be biased on this one, but I think it is perhaps the most important development not only in string theory, but also in theoretical physics at least in the past two decades.” What is triggering all this excitement is the discovery of something called “M-theory,” a theory which may explain the origin of strings. In one dazzling stroke, this new M-theory has solved a series of long-standing mysteries about string theory which have dogged it from the beginning, leaving many theoretical physicists (myself included!) gasping for breath. M-theory, moreover, may even force string theory to change its name. Although many features of M-theory are still unknown, it does not seem to be a theory purely of strings. Michael Duff of Texas A&M is already giving speeches with the title “The theory formerly known as strings!” String theorists are careful to point out that this does not prove the final correctness of the theory. Not by any means. That may take years or decades more. But it marks a most significant breakthrough that is already reshaping the entire field.

Parable of the Lion

Einstein once said, “Nature shows us only the tail of the lion. But I do not doubt that the lion belongs to it even though he cannot at once reveal himself because of his enormous size.” Einstein spent the last 30 years of his life searching for the “tail” that would lead him to the “lion,” the fabled unified field theory or the “theory of everything,” which would unite all the forces of the universe into a single equation. The four forces (gravity, electromagnetism, and the strong and weak nuclear forces) would be unified by an equation perhaps one inch long. Capturing the “lion” would be the greatest scientific achievement in all of physics, the crowning achievement of 2,000 years of scientific investigation, ever since the Greeks first asked themselves what the world was made of. But although Einstein was the first one to set off on this noble hunt and track the footprints left by the lion, he ultimately lost the trail and wandered off into the wilderness. Other giants of 20th century physics, like Werner Heisenberg and Wolfgang Pauli, also joined in the hunt. But all the easy ideas were tried and shown to be wrong. When Niels Bohr once heard a lecture by Pauli explaining his version of the unified field theory, Bohr stood up and said, “We in the back are all agreed that your theory is crazy. But what divides us is whether your theory is crazy enough!”

The trail leading to the unified field theory, in fact, is littered with the wreckage of failed expeditions and dreams. Today, however, physicists are following a different trail which might be “crazy enough” to lead to the lion. This new trail leads to superstring theory, which is the best (and in fact only) candidate for a theory of everything. Unlike its rivals, it has survived every blistering mathematical challenge ever hurled at it. Not surprisingly, the theory is a radical, “crazy” departure from the past, being based on tiny strings vibrating in 10-dimensional space-time. Moreover, the theory easily swallows up Einstein’s theory of gravity. Witten has said, “Unlike conventional quantum field theory, string theory requires gravity. I regard this fact as one of the greatest insights in science ever made.” But until recently, there has been a glaring weak spot: string theorists have been unable to probe all solutions of the model, failing miserably to examine what is called the “non-perturbative region,” which I will describe shortly. This is vitally important, since ultimately our universe (with its wonderfully diverse collection of galaxies, stars, planets, subatomic particles, and even people) may lie in this “non-perturbative region.” Until this region can be probed, we don’t know if string theory is a theory of everything — or a theory of nothing! That’s what today’s excitement is all about. For the first time, using a powerful tool called “duality,” physicists are now probing beyond just the tail, and finally seeing the outlines of a huge, unexpectedly beautiful lion at the other end. Not knowing what to call it, Witten has dubbed it “M-theory.” In one stroke, M-theory has solved many of the embarrassing features of the theory, such as why we have 5 superstring theories. Ultimately, it may solve the nagging question of where strings come from.

“Pea Brains” and the Mother of all Strings

Einstein once asked himself if God had any choice in making the universe. Perhaps not, which is why it was embarrassing for string theorists to have five different self-consistent string theories, each of which can unite the two fundamental theories in physics, the theory of gravity and the quantum theory.

Each of these string theories looks completely different from the others. They are based on different symmetries, with exotic names like E(8)xE(8) and O(32).

Not only this, but superstrings are in some sense not unique: there are other non-string theories which contain “supersymmetry,” the key mathematical symmetry underlying superstrings. (Changing light into electrons and then into gravity is one of the rather astonishing tricks performed by supersymmetry, which is the symmetry that can exchange particles with half-integral spin, like electrons and quarks, with particles of integral spin, like photons, gravitons, and W-particles.)

In 11 dimensions, in fact, there are alternate super theories based on membranes as well as point particles (called supergravity). In lower dimensions, there is moreover a whole zoo of super theories based on membranes in different dimensions. (For example, point particles are 0-branes, strings are 1-branes, membranes are 2-branes, and so on.) For the p-dimensional case, some wag dubbed them p-branes (pronounced “pea brains”). But because p-branes are horribly difficult to work with, they were long considered just a historical curiosity, a trail that led to a dead end. (Michael Duff, in fact, has collected a whole list of unflattering comments made by referees to his National Science Foundation grant concerning his work on p-branes. One of the more charitable comments from a referee was: “He has a skewed view of the relative importance of various concepts in modern theoretical physics.”) So that was the mystery. Why should supersymmetry allow for 5 superstrings and this peculiar, motley collection of p-branes? Now we realize that strings, supergravity, and p-branes are just different aspects of the same theory. M-theory (M for “membrane” or the “mother of all strings,” take your pick) unites the 5 superstrings into one theory and includes the p-branes as well. To see how this all fits together, let us update the famous parable of the blind wise men and the elephant. Think of the blind men on the trail of the lion. Hearing it race by, they chase after it and desperately grab onto its tail (a one-brane). Hanging onto the tail for dear life, they feel its one-dimensional form and loudly proclaim “It’s a string! It’s a string!”

But then one blind man goes beyond the tail and grabs onto the ear of the lion. Feeling a two-dimensional surface (a membrane), the blind man proclaims, “No, it’s really a two-brane!” Then another blind man is able to grab onto the leg of the lion. Sensing a three-dimensional solid, he shouts, “No, you’re both wrong. It’s really a three-brane!” Actually, they are all right. Just as the tail, ear, and leg are different parts of the same lion, the string and various p-branes appear to be different limits of the same theory: M-theory. Paul Townsend of Cambridge University, one of the architects of this idea, calls it “p-brane democracy,” i.e. all p-branes (including strings) are created equal. Schwarz puts a slightly different spin on this. He says, “we are in an Orwellian situation: all p-branes are equal, but some (namely strings) are more equal than others. The point is that they are the only ones on which we can base a perturbation theory.” To understand unfamiliar concepts such as duality, perturbation theory, and non-perturbative solutions, it is instructive to see where these concepts first entered into physics.

Duality

The key tool to understanding this breakthrough is something called “duality.” Loosely speaking, two theories are “dual” to each other if they can be shown to be equivalent under a certain interchange. The simplest example of duality is reversing the role of electricity and magnetism in the equations discovered by James Clerk Maxwell of Cambridge University 130 years ago. These are the equations which govern light, TV, X-rays, radar, dynamos, motors, transformers, even the Internet and computers. The remarkable feature about these equations is that they remain the same if we interchange the magnetic field B and the electric field E and also switch the electric charge e with the magnetic charge g of a magnetic “monopole”: E <–> B and e <–> g. (In fact, the product eg is a constant.)

This has important implications. Often, when a theory cannot be solved exactly, we use an approximation scheme. In first-year calculus, for example, we recall that we can approximate certain functions by Taylor’s expansion. Similarly, since e^2 = 1/137 in certain units and is hence a small number, we can always approximate the theory by power expanding in e^2. So we add contributions of order e^2 + e^4 + e^6 etc. in solving for, say, the collision of two particles. Notice that each contribution is getting smaller and smaller, so we can in principle add them all up. This generalization of Taylor’s expansion is called “perturbation theory,” where we perturb the system with terms containing e^2. (For example, in archery, perturbation theory is how we aim our arrows: with each small correction of our arms, our bow gets closer and closer to aligning with the bull’s eye.)

But now try expanding in g^2. This is much tougher; in fact, if we expand in g^2, which is large, then the sum g^2 + g^4 + g^6 etc. blows up and becomes meaningless. This is the reason why the “non-perturbative” region is so difficult to probe, since the theory simply blows up if we try to naively use perturbation theory for a large coupling constant g. So at first it appears hopeless that we could ever penetrate into the non-perturbative region. (If every motion of our arms got bigger and bigger, we would never be able to zero in and hit the target with the arrow.) But notice that because of duality, a theory of small e (which is easily solved) is identical to a theory of large g (which is difficult to solve). But since they are the same theory, we can use duality to solve for the non-perturbative region.
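
Schematically, the argument of this passage can be written out in a few lines (a restatement of the relations above, not new physics):

```latex
% Electric-magnetic duality in Maxwell's equations:
E \;\longleftrightarrow\; B, \qquad e \;\longleftrightarrow\; g,
\qquad e\,g = \text{constant}.

% Perturbation theory: an expansion in the small electric coupling,
A \;=\; a_1 e^2 + a_2 e^4 + a_3 e^6 + \cdots
\qquad \bigl(e^2 \approx \tfrac{1}{137},\ \text{terms shrink}\bigr),

% whereas the same expansion in the large magnetic coupling diverges,
A \;=\; b_1 g^2 + b_2 g^4 + b_3 g^6 + \cdots
\qquad \bigl(g^2 \gg 1,\ \text{terms grow}\bigr).

% Duality trades one for the other: since g = \text{constant}/e, the
% intractable large-g (non-perturbative) regime maps onto the solvable
% small-e (perturbative) regime of the dual theory.
```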

S, T, and U Duality

The first inkling that duality might apply in string theory was discovered by K. Kikkawa and M. Yamasaki of Osaka University in 1984. They showed that if you “curled up” one of the extra dimensions into a circle with radius R, the theory was the same as if we curled up this dimension with radius 1/R. This is now called T-duality: R <–> 1/R. When applied to the various superstrings, it reduces the 5 string theories down to 3. In 9 dimensions (with one dimension curled up) the Type IIa and IIb strings were identical, as were the E(8)xE(8) and O(32) strings.
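
For the curious, the standard way to see this identity is from the closed-string mass spectrum on a circle, written here in string units (setting alpha' = 1 is my simplifying convention, not something from the essay):

```latex
% Closed string on a circle of radius R (string units, \alpha' = 1):
M^2 \;=\; \frac{n^2}{R^2} \;+\; w^2 R^2 \;+\; (\text{oscillator terms}),

% where n is the momentum quantum number and w the winding number.
% The spectrum is unchanged under
R \;\longrightarrow\; \frac{1}{R}, \qquad n \;\longleftrightarrow\; w,

% which is the T-duality R <--> 1/R quoted above: momentum modes of one
% theory become winding modes of the other.
```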

Unfortunately, T-duality was still a perturbative duality. The next breakthrough came when it was shown that there was a second class of dualities, called S-duality, which provided a duality between the perturbative and non-perturbative regions of string theory. Another duality, called U-duality, was even more powerful.

Then Nathan Seiberg and Witten brilliantly showed how another form of duality could solve for the non-perturbative region in four-dimensional supersymmetric theories. However, what finally convinced many physicists of the power of this technique was the work of Paul Townsend and Edward Witten. They caught everyone by surprise by showing that there was a duality between 10-dimensional Type IIa strings and 11-dimensional supergravity! The non-perturbative region of Type IIa strings, which was previously a forbidden region, was revealed to be governed by 11-dimensional supergravity theory, with one dimension curled up. At this point, I remember that many physicists (myself included) were rubbing our eyes, not believing what we were seeing. I remember saying to myself, “But that’s impossible!”

All of a sudden, we realized that perhaps the real “home” of string theory was not 10 dimensions, but possibly 11, and that the theory wasn’t fundamentally a string theory at all! This revived tremendous interest in 11-dimensional theories and p-branes. Lurking in the 11th dimension was an entirely new theory which could reduce down to 11-dimensional supergravity as well as 10-dimensional string theory and p-brane theory.

Detractors of String Theories

To the critics, however, these mathematical developments still don’t answer the nagging question: how do you test it? Since string theory is really a theory of Creation, when all its beautiful symmetries were in their full glory, the only way to test it, the critics wail, is to re-create the Big Bang itself, which is impossible. Nobel Laureate Sheldon Glashow likes to ridicule superstring theory by comparing it with former President Reagan’s Star Wars plan: both are untestable, soak up resources, and siphon off the best scientific brains.

Actually, most string theorists think these criticisms are silly. They believe that the critics have missed the point. The key point is this: if the theory can be solved non-perturbatively using pure mathematics, then it should reduce down at low energies to a theory of ordinary protons, electrons, atoms, and molecules, for which there is ample experimental data. If we could completely solve the theory, we should be able to extract its low-energy spectrum, which should match the familiar particles we see today in the Standard Model. Thus, the problem is not building atom smashers 1,000 light-years in diameter; the real problem is raw brain power: if only we were clever enough, we could write down M-theory, solve it, and settle everything.

Evolving Backwards

So what would it take to actually solve the theory once and for all and end all the speculation and back-biting? There are several approaches. The first is the most direct: try to derive the Standard Model of particle interactions, with its bizarre collection of quarks, gluons, electrons, neutrinos, Higgs bosons, and so on. (I must admit that although the Standard Model is the most successful physical theory ever proposed, it is also one of the ugliest.) This might be done by curling up 6 of the 10 dimensions, leaving us with a 4-dimensional theory that might resemble the Standard Model a bit. Then try to use duality and M-theory to probe its non-perturbative region, seeing if the symmetries break in the correct fashion, giving us the correct masses of the quarks and other particles in the Standard Model. Witten’s philosophy, however, is a bit different. He feels that the key to solving string theory is to understand the underlying principle behind the theory.

Let me explain. Einstein’s theory of general relativity, for example, started from first principles. Einstein had the “happiest thought in his life” when he leaned back in his chair at the Bern patent office and realized that a person in a falling elevator would feel no gravity. Although physicists since Galileo knew this, Einstein was able to extract from this the Equivalence Principle. This deceptively simple statement (that the laws of physics are indistinguishable locally in an accelerating or a gravitating frame) led Einstein to introduce a new symmetry to physics, general co-ordinate transformations. This in turn gave birth to the action principle behind general relativity, the most beautiful and compelling theory of gravity. Only now are we trying to quantize the theory to make it compatible with the other forces. So the evolution of this theory can be summarized as:

Principle -> Symmetry -> Action -> Quantum Theory

According to Witten, we need to discover the analog of the Equivalence Principle for string theory. The fundamental problem has been that string theory has been evolving “backwards.” As Witten says, “string theory is 21st century physics which fell into the 20th century by accident.” We were never “meant” to see this theory until the next century.

Is the End in Sight?

Vafa recently added a strange twist to this when he introduced yet another mega-theory, this time a 12 dimensional theory called F-theory (F for “father”) which explains the self-duality of the IIb string. (Unfortunately, this 12 dimensional theory is rather strange: it has two time co-ordinates, not one, and actually violates 12 dimensional relativity. Imagine trying to live in a world with two times! It would put an episode of Twilight Zone to shame.) So is the final theory 10, 11, or 12 dimensional?

Schwarz, for one, feels that the final version of M-theory may not even have any fixed dimension. He feels that the true theory may be independent of any dimensionality of space-time, and that 11 dimensions only emerge once one tries to solve it. Townsend seems to agree, saying “the whole notion of dimensionality is an approximate one that only emerges in some semiclassical context.” So does this mean that the end is in sight, that we will someday soon derive the Standard Model from first principles? I asked some of the leaders in this field to respond to this question. Although they are all enthusiastic supporters of this revolution, they are still cautious about predicting the future. Townsend believes that we are in a stage similar to the old quantum era of the Bohr atom, just before the full elucidation of quantum mechanics. He says, “We have some fruitful pictures and some rules analogous to the Bohr-Sommerfeld quantization rules, but it’s also clear that we don’t have a complete theory.”

Duff says, “Is M-theory merely a theory of supermembranes and super 5-branes requiring some (as yet unknown) non-perturbative quantization, or (as Witten believes) are the underlying degrees of freedom of M-theory yet to be discovered? I am personally agnostic on this point.” Witten certainly believes we are on the right track, but we need a few more “revolutions” like this to finally solve the theory. “I think there are still a couple more superstring revolutions in our future, at least. If we can manage one more superstring revolution a decade, I think that we will do all right,” he says. Vafa says, “I hope this is the ‘light at the end of the tunnel’ but who knows how long the tunnel is!” Schwarz, moreover, has written about M-theory: “Whether it is based on something geometrical (like supermembranes) or something completely different is still not known. In any case, finding it would be a landmark in human intellectual history.” Personally, I am optimistic. For the first time, we can see the outline of the lion, and it is magnificent. One day, we will hear it roar.

The Physics of Interstellar Travel

To one day reach the stars.

When discussing the possibility of interstellar travel, there is something called “the giggle factor.” Some scientists tend to scoff at the idea of interstellar travel because of the enormous distances that separate the stars. According to Special Relativity (1905), no usable information can travel faster than light locally, and hence it would take centuries to millennia for an extra-terrestrial civilization to travel between the stars. Even the familiar stars we see at night are about 50 to 100 light years from us, and our galaxy is 100,000 light years across. The nearest galaxy is 2 million light years from us. The critics say that the universe is simply too big for interstellar travel to be practical.

Similarly, investigations into UFOs that may originate from another planet are sometimes the “third rail” of a scientific career. There is no funding for anyone seriously looking at unidentified objects in space, and one’s reputation may suffer if one pursues an interest in these unorthodox matters. In addition, perhaps 99% of all sightings of UFOs can be dismissed as being caused by familiar phenomena, such as the planet Venus, swamp gas (which can glow in the dark under certain conditions), meteors, satellites, weather balloons, even radar echoes that bounce off mountains. (What is disturbing to a physicist, however, is the remaining 1% of these sightings, which are multiple sightings made by multiple methods of observation. Some of the most intriguing sightings have been made by seasoned pilots and passengers aboard airline flights, which have also been tracked by radar and videotaped. Sightings like this are harder to dismiss.)

But to an astronomer, the existence of intelligent life in the universe is a compelling idea by itself: extra-terrestrial civilizations, centuries to millennia more advanced than ours, may exist around other stars. Within the Milky Way galaxy alone, there are over 100 billion stars, and there are an uncountable number of galaxies in the universe. About half of the stars we see in the heavens are double stars, probably making them unsuitable for intelligent life, but the remaining half probably have solar systems somewhat similar to ours. Although none of the over 100 extra-solar planets so far discovered in deep space resemble ours, it is inevitable, many scientists believe, that one day we will discover small, earth-like planets which have liquid water (the “universal solvent” which made possible the first DNA perhaps 3.5 billion years ago in the oceans). The discovery of earth-like planets may take place within 20 years, when NASA intends to launch a space interferometry satellite into orbit which may be sensitive enough to detect small planets orbiting other stars.

So far, we see no hard evidence of signals from extra-terrestrial civilizations from any earth-like planet. The SETI project (the search for extra-terrestrial intelligence) has yet to produce any reproducible evidence of intelligent life in the universe from such earth-like planets, but the matter still deserves serious scientific analysis. The key is to reanalyze the objection to faster-than-light travel.

A critical look at this issue must necessarily embrace two new observations. First, Special Relativity itself was superseded by Einstein’s own more powerful General Relativity (1915), in which faster-than-light travel is possible under certain rare conditions. The principal difficulty is amassing enough energy of a certain type to break the light barrier. Second, one must therefore analyze extra-terrestrial civilizations on the basis of their total energy output and the laws of thermodynamics. In this respect, one must analyze civilizations which are perhaps thousands to millions of years ahead of ours.

The first realistic attempt to analyze extra-terrestrial civilizations from the point of view of the laws of physics and the laws of thermodynamics was by the Russian astrophysicist Nikolai Kardashev. He ranked possible civilizations by total energy output, which can be quantified and used as a guide to explore the dynamics of advanced civilizations:

Type I: this civilization harnesses the energy output of an entire planet.

Type II: this civilization harnesses the energy output of a star, and generates about 10 billion times the energy output of a Type I civilization.

Type III: this civilization harnesses the energy output of a galaxy, or about 10 billion times the energy output of a Type II civilization.

A Type I civilization would be able to manipulate truly planetary energies. They might, for example, control or modify their weather. They would have the power to manipulate planetary phenomena, such as hurricanes, which can release the energy of hundreds of hydrogen bombs. Perhaps volcanoes or even earthquakes may be altered by such a civilization.

A Type II civilization may resemble the Federation of Planets seen on the TV program Star Trek (which is capable of igniting stars and has colonized a tiny fraction of the near-by stars in the galaxy). A Type II civilization might be able to manipulate the power of solar flares.

A Type III civilization may resemble the Borg, or perhaps the Empire found in the Star Wars saga. They have colonized the galaxy itself, extracting energy from hundreds of billions of stars.

By contrast, we are a Type 0 civilization, which extracts its energy from dead plants (oil and coal). Growing at the average rate of about 3% per year, however, one may calculate that our own civilization may attain Type I status in about 100-200 years, Type II status in a few thousand years, and Type III status in about 100,000 to a million years. These time scales are insignificant when compared with the universe itself.
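
The "100-200 years" figure follows from compound growth alone. Here is a back-of-the-envelope check; the assumption that Type I sits roughly two to three orders of magnitude above today's energy output is mine (borrowed from Sagan's convention of about 10^16 W for Type I), not a number from the essay.

```python
import math

# Years to grow by a given factor at 3% compound annual growth.
growth = 1.03
for ratio in (1e2, 1e3):
    years = math.log(ratio) / math.log(growth)
    print(f"factor {ratio:.0e}: ~{years:.0f} years")
# factor 1e+02: ~156 years
# factor 1e+03: ~234 years
```

This reproduces the Type I timescale; the much longer Type II/III estimates in the text are set by more than compounding alone (for example, light-speed limits on expanding across a 100,000-light-year galaxy).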

On this scale, one may now rank the different propulsion systems available to different types of civilizations:

Type 0

  • Chemical rockets
  • Ionic engines
  • Fission power
  • EM propulsion (rail guns)

 

Type I

  • Ram-jet fusion engines
  • Photonic drive

 

Type II

  • Antimatter drive
  • Von Neumann nano probes

 

Type III

  • Planck energy propulsion

 

Propulsion systems may be ranked by two quantities: their specific impulse, and the final velocity of travel. Specific impulse measures how much total impulse (thrust multiplied by the time over which it acts) an engine extracts from each unit of propellant. At present, almost all our rockets are based on chemical reactions. Chemical rockets have the smallest specific impulse, since they only operate for a few minutes. Their thrust may be measured in millions of pounds, but they operate for such a short duration that their specific impulse is quite small.
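
The standard relation connecting specific impulse to final velocity is the Tsiolkovsky rocket equation; it is not spelled out in the essay, but it underlies the ranking that follows:

```latex
% Tsiolkovsky rocket equation: the final velocity change depends on the
% exhaust velocity v_e (equivalently, specific impulse I_{sp}) and the
% ratio of fueled mass m_0 to empty mass m_f:
\Delta v \;=\; v_e \ln\frac{m_0}{m_f} \;=\; I_{sp}\, g_0 \ln\frac{m_0}{m_f}.
```

Chemical engines top out near a specific impulse of roughly 450 seconds, while ion engines reach thousands of seconds, which is why ion engines win on final velocity despite their tiny thrust.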

NASA is experimenting today with ion engines, which have a much larger specific impulse, since they can operate for months, but have an extremely low thrust. For example, an ion engine which ejects cesium ions may have the thrust of a few ounces, but in deep space they may reach great velocities over a period of time since they can operate continuously. They make up in time what they lose in thrust. Eventually, long-haul missions between planets may be conducted by ion engines.

For a Type I civilization, one can envision newer types of technologies emerging. Ram-jet fusion engines have an even larger specific impulse, operating for years by consuming the free hydrogen found in deep space. However, it may take decades before fusion power is harnessed commercially on earth, and the proton-proton fusion process of a ram-jet fusion engine may take even longer to develop, perhaps a century or more. Laser or photonic engines, propelled by laser beams pushing a gigantic sail, may have even larger specific impulses. One can envision huge laser batteries placed on the moon, generating beams that push a laser sail in outer space. This technology, which depends on operating large bases on the moon, is probably many centuries away.

For a Type II civilization, a new form of propulsion is possible: the anti-matter drive. Matter-anti-matter collisions provide a 100% efficient way to extract energy from matter. However, anti-matter is an exotic form of matter which is extremely expensive to produce. The atom smasher at CERN, outside Geneva, is barely able to make tiny samples of anti-hydrogen gas (anti-electrons circling anti-protons). It may take many centuries to millennia to bring down the cost so that it can be used for space flight.

Given the astronomical number of possible planets in the galaxy, a Type II civilization may try a more realistic approach than conventional rockets and use nanotechnology to build tiny, self-replicating robot probes which can proliferate through the galaxy in much the same way that a microscopic virus can self-replicate and colonize a human body within a week. Such a civilization might send tiny robot von Neumann probes to distant moons, where they would create large factories to reproduce millions of copies of themselves. Such a von Neumann probe need only be the size of a bread box, using sophisticated nanotechnology to make atomic-sized circuitry and computers. Then these copies take off to land on other distant moons and start the process all over again. Such probes may then wait on distant moons for a primitive Type 0 civilization to mature into a Type I civilization, which would then be interesting to them. (There is the small but distinct possibility that one such probe was left on our own moon billions of years ago by a passing space-faring civilization. This, in fact, is the premise of the movie 2001, perhaps the most realistic portrayal of contact with extra-terrestrial intelligence.)

The problem, as one can see, is that none of these engines can exceed the speed of light. Hence, Type 0, I, and II civilizations probably can send probes or colonies only to within a few hundred light-years of their home planet. Even with von Neumann probes, the best that a Type II civilization can achieve is to create a large sphere of billions of self-replicating probes expanding just below the speed of light. To break the light barrier, one must utilize General Relativity and the quantum theory. This requires energies which are available only to a very advanced Type II civilization or, more likely, a Type III civilization.

Special Relativity states that no usable information can travel locally faster than light. One may go faster than light, therefore, if one uses the possibility of globally warping space and time, i.e. General Relativity. In other words, in such a rocket, a passenger who is watching the motion of passing stars would say he is going slower than light. But once the rocket arrives at its destination and clocks are compared, it appears as if the rocket went faster than light because it warped space and time globally, either by taking a shortcut, or by stretching and contracting space.

There are at least two ways in which General Relativity may yield faster than light travel. The first is via wormholes, or multiply connected Riemann surfaces, which may give us a shortcut across space and time. One possible geometry for such a wormhole is to assemble stellar amounts of energy in a spinning ring (creating a Kerr black hole). Centrifugal force prevents the spinning ring from collapsing. Anyone passing through the ring would not be ripped apart, but would wind up on an entirely different part of the universe. This resembles the Looking Glass of Alice, with the rim of the Looking Glass being the black hole, and the mirror being the wormhole. Another method might be to tease apart a wormhole from the “quantum foam” which physicists believe makes up the fabric of space and time at the Planck length (10 to the minus 33 centimeters).

The problems with wormholes are many:

a) one version requires enormous amounts of positive energy, e.g. a black hole. Positive-energy wormholes have event horizons and hence only give us a one-way trip. One would need two black holes (one for the original trip, and one for the return trip) to make interstellar travel practical. Most likely only a Type III civilization would be able to harness this power.

b) wormholes may be unstable, both classically and quantum mechanically. They may close up as soon as you try to enter them. Or radiation effects may soar as you enter them, killing you.

c) one version requires vast amounts of negative energy. Negative energy does exist (in the form of the Casimir effect), but huge quantities of negative energy will be beyond our technology, perhaps for millennia. The advantage of negative-energy wormholes is that they do not have event horizons and hence are more easily traversable.

d) another version requires large amounts of negative matter. Unfortunately, negative matter has never been seen in nature (it would fall up, rather than down). Any negative matter on the earth would have fallen up billions of years ago, making the earth devoid of any negative matter.

The second possibility is to use large amounts of energy to continuously stretch space and time (i.e. contracting the space in front of you, and expanding the space behind you). Since only empty space is contracting or expanding, one may exceed the speed of light in this fashion. (Space itself can stretch faster than light; the Big Bang, for example, expanded much faster than the speed of light.) The problem with this approach, again, is that vast amounts of energy are required, making it feasible only for a Type III civilization. Energy scales for all these proposals are on the order of the Planck energy (10^19 billion electron volts, which is a quadrillion times larger than the energy of our most powerful atom smasher).
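
The "quadrillion" comparison checks out if one takes today's Large Hadron Collider as the benchmark (the choice of collider is mine; the essay predates it):

```latex
E_{\text{Planck}} \approx 1.2 \times 10^{19}\ \text{GeV}, \qquad
E_{\text{LHC}} \approx 1.4 \times 10^{4}\ \text{GeV}, \qquad
\frac{E_{\text{Planck}}}{E_{\text{LHC}}} \approx 10^{15}\ (\text{a quadrillion}).
```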

Lastly, there is the fundamental physics problem of whether “topology change” is possible within General Relativity (which would also make possible time machines, or closed time-like curves). General Relativity allows for closed time-like curves and wormholes (often called Einstein-Rosen bridges), but it unfortunately breaks down at the large energies found at the center of black holes or the instant of Creation. For these extreme energy domains, quantum effects will dominate over classical gravitational effects, and one must go to a “unified field theory” of quantum gravity.

At present, the most promising (and only) candidate for a “theory of everything,” including quantum gravity, is superstring theory or M-theory. It is the only theory in which quantum forces may be combined with gravity to yield finite results. No other theory can make this claim. With only mild assumptions, one may show that the theory allows for quarks arranged much like the configuration found in the current Standard Model of sub-atomic physics. Because the theory is defined in 10- or 11-dimensional hyperspace, it introduces a new cosmological picture: that our universe is a bubble or membrane floating in a much larger multiverse or megaverse of bubble-universes.

Unfortunately, although black hole solutions have been found in string theory, the theory is not yet developed to answer basic questions about wormholes and their stability. Within the next few years or perhaps within a decade, many physicists believe that string theory will mature to the point where it can answer these fundamental questions about space and time. The problem is well-defined. Unfortunately, even though the leading scientists on the planet are working on the theory, no one on earth is smart enough to solve the superstring equations.

Conclusion

Most scientists doubt interstellar travel because the light barrier is so difficult to break. However, to go faster than light, one must go beyond Special Relativity to General Relativity and the quantum theory. Therefore, one cannot rule out interstellar travel if an advanced civilization can attain enough energy to destabilize space and time. Perhaps only a Type III civilization can harness the Planck energy, the energy at which space and time become unstable. Various proposals have been given to exceed the light barrier (including wormholes and stretched or warped space) but all of them require energies found only in Type III galactic civilizations. On a mathematical level, ultimately, we must wait for a fully quantum mechanical theory of gravity (such as superstring theory) to answer these fundamental questions, such as whether wormholes can be created and whether they are stable enough to allow for interstellar travel.

Teach your brain to stretch time

Over the past few years, neuroscientists have started probing the brain’s timing mechanisms using measurements of electrical activity and imaging techniques such as fMRI.

Flexible Electronics Melded With Contact Lens Creates Bionic Eye

Engineers at the University of Washington (UW) have for the first time used microscopic manufacturing techniques to combine a flexible, biologically safe contact lens with an imprinted electronic circuit and lights.
