Book Full Title: Science Set Free: 10 Paths to New Discovery.

Book page on Goodreads: Science Set Free: 10 Paths to New Discovery.

Book in a Paragraph

The book criticizes materialism and presents evidence that contradicts its principles. Materialism is not based on science; it is a belief that achieved its dominance within institutional science in the second half of the nineteenth century. Many scientists invoke untestable multiverse theories, built on ideas such as superstring theory, to explain why the conditions and constants at the time of the Big Bang were suitable (fine-tuned) for biological life; if they had been slightly different, biological life could never have emerged. They use these theories to avoid believing in a creator God. The book discusses, with supporting evidence, telepathy, hypnosis, the phenomenon of inedia (going without food or drink for years), and other experiences that challenge the materialistic view of the world. It shows, with examples, that scientists are ordinary humans who can be corrupted and act unethically. It argues that modern science is so focused on material matter that it downplays and leaves no room for other fields that might prove greatly beneficial, such as alternative medicine. Finally, it describes the vast gulf between the rhetoric about the powers of genes and what genes actually do, and the disappointment that followed the results of the Human Genome Project.

Impressions

It was a little annoying that the author mentions “morphic resonance” so often as an explanation for various phenomena, even though it does not seem to have strong proof. He refers readers to another book of his where he explains it in more detail.

Who Should Read It?

People who idolize modern scientists and think that “modern science” is the only way of knowing. People who treat science as a religion.

Notes

The Cosmological Anthropic Principle asserts that if the laws and constants of nature had been slightly different at the moment of the Big Bang, biological life could never have emerged, and hence we would not be here to think about it. So did a divine mind fine-tune the laws and constants in the beginning? To avoid a creator God emerging in a new guise, most leading cosmologists prefer to believe that our universe is one of a vast, and perhaps infinite, number of parallel universes, all with different laws and constants, as M-theory also suggests. We just happen to exist in the one that has the right conditions for us. This multiverse theory is the ultimate violation of Occam’s Razor, the philosophical principle that “entities must not be multiplied beyond necessity,” or in other words, that we should make as few assumptions as possible. It also has the major disadvantage of being untestable. And it does not even succeed in getting rid of God. An infinite God could be the God of an infinite number of universes.

According to the Anthropic Cosmological Principle, the fact that the “laws” and “constants” of nature are just right for human life on this planet requires an explanation. If these laws and constants had been even slightly different, carbon-based life would not exist. One response is to suggest that an Intelligent Designer fine-tuned the laws and constants of nature at the moment of the Big Bang so they were exactly right for the emergence of life and human beings. This is a modern version of deism. But an appeal to a divine mind, albeit of a remote, mathematical kind, is contrary to the atheistic spirit of much modern science. Instead many cosmologists prefer to think that there are innumerable actually existing universes besides our own, each with different laws and constants. In these “multiverse” models the fact that we occupy a universe that is just right for us is explained very simply. This is the only universe that we can actually observe precisely because it is the only one right for us. The other theoretical reason for the popularity of the multiverse is superstring theory. This ten-dimensional theory and the related eleven-dimensional M-theory generate far too many possible solutions, which could correspond to different universes, as many as 10^500.

Multiverse theories assume that particular laws and constants are built into each separate universe at the moment of its origin or Big Bang. They are somehow “imprinted” in each universe. But how are they remembered? How does an individual universe “know” what laws and constants are governing it, as opposed to the different laws and constants of other universes? As the cosmologist Martin Rees expressed it, “The physical laws were themselves ‘laid down’ in the Big Bang.” But, he admitted, “The mechanisms that might ‘imprint’ the basic laws and constants in a new universe are obviously far beyond anything we understand.” Some physicists and cosmologists are unhappy with these speculations. A vast number of unobserved universes violates the canon of scientific testability. Multiverse supporters claim that mathematics itself, in the form of string and M-theories, provides evidence in favor of their speculations. But string and M-theories themselves, on which many of these speculations are based, are untestable. One critic, Peter Woit, called his book on the subject Not Even Wrong. Even generic predictions that superstring theory shares with other theories, such as supersymmetry, have not fared well.

The founders of mechanistic science in the seventeenth century, including Johannes Kepler, Galileo Galilei, René Descartes, Francis Bacon, Robert Boyle and Isaac Newton, were all practicing Christians. Newton calculated that the Day of Judgment would occur between the years 2060 and 2344.

Materialists recognized only one reality: the material world. The spiritual realm did not exist. Gods, angels and spirits were figments of the human imagination, and human minds were nothing but aspects or by-products of brain activity. The materialist philosophy achieved its dominance within institutional science in the second half of the nineteenth century, and was closely linked to the rise of atheism in Europe. Twenty-first-century atheists, like their predecessors, take the doctrines of materialism to be established scientific facts, not just assumptions. When it was combined with the idea that the entire universe was like a machine running out of steam, according to the second law of thermodynamics, materialism led to the cheerless worldview expressed by the philosopher Bertrand Russell: “That man is the product of causes which had no prevision of the end they were achieving; that his origin, his growth, his hopes and fears, his loves and beliefs, are but the outcome of accidental collisions of atoms; that no fire, no heroism, no intensity of thought and feeling, can preserve an individual life beyond the grave; that all the labors of the ages, all the devotion, all the inspiration, all the noonday brightness of human genius, are destined to extinction in the vast death of the solar system; and that the whole temple of Man’s achievement must inevitably be buried beneath the debris of a universe in ruins—all these things, if not quite beyond dispute, are yet so nearly certain, that no philosophy which rejects them can hope to stand. Only within the scaffolding of these truths, only on the firm foundation of unyielding despair, can the soul’s habitation henceforth be built.”

Big Bang theory: It was not until 1927 that Georges Lemaître, a cosmologist and Roman Catholic priest, advanced a scientific hypothesis reminiscent of Hume’s idea of the universe originating from an egg or seed. Lemaître suggested that the universe began with a “creation-like event,” which he described as “the cosmic egg exploding at the moment of creation.” Later called the Big Bang, this new cosmology echoed many archaic stories of origins, like the Orphic creation myth of the Cosmic Egg in ancient Greece, or the Indian myth of Hiranyagarbha, the primal Golden Egg. Lemaître’s theory predicted the expansion of the universe, and was supported by the discovery that galaxies outside our own are moving away from us with a speed proportional to their distance. In 1964, the discovery of a faint background glow everywhere in the universe, the cosmic microwave background radiation, revealed what seemed to be fossil light left over from the early universe, soon after the Big Bang. The evidence for an initial “creation-like event” became overwhelming, and by 1966 the Big Bang theory had become orthodox.

According to the theory of quantum electrodynamics, brilliantly expounded by the physicist Richard Feynman, virtual particles, such as electrons and photons, appear and disappear from the quantum vacuum field, also known as the zero-point field, that pervades the universe. Feynman called this theory the “jewel of physics” because of its extremely accurate predictions, correct to many decimal places. The price that is paid for this accuracy is the acceptance of invisible, unobservable particles and interactions, and of the mysterious quantum vacuum field. According to quantum electrodynamics, all electrical and magnetic forces are mediated by virtual photons that appear from the quantum vacuum field and then disappear into it again. When you look at a compass to find out where north is, the compass needle interacts with the earth’s magnetic field through virtual photons. When you switch on a fan, its electric motor makes it go round because it is suddenly filled with virtual photons that exert forces.

We have come a long way from a simple belief in atoms of matter as tiny solid objects that persist unchanged through time. According to current theories, matter itself is an energetic process, and mass depends on interactions with fields that pervade the vacuum. Mass, the quantitative measure of matter, turns out to be deeply mysterious. According to the Standard Model of particle physics, the mass of a particle like an electron or a proton is not inherent in the particle itself but depends on its interaction with a field called the Higgs field. The mass of an electron, for example, arises through its interaction with the Higgs field, and this interaction depends on special Higgs particles, called Higgs bosons. At the time the book was written, no Higgs boson had been detected, despite the expenditure of billions of euros on the search; a particle matching the predicted Higgs boson was announced at CERN only in 2012.

Dark matter helps to explain the structures of galaxies and the relations between galaxies within clusters, but it does so at a heavy price: nobody knows what it is. Detailed observations of distant supernovas (exploding stars in faraway galaxies) showed that the expansion of the universe is speeding up. Gravitation ought to be slowing it down, so something else must account for the accelerating expansion. Physicists were forced to conclude that there must be an antigravity-like force, called dark energy.

The inedia phenomenon: some people seem to be able to live for months or years without eating. This phenomenon is known as inedia (Latin for “fasting”). Several of these long-term fasters were monitored to verify that they were not eating or drinking during their fasts.

The Australian astronomer John Webb studied the fine structure constant, α, to see whether it is really constant. He found evidence that it is not. As Webb and his colleague John Barrow pointed out, “If α is susceptible to change, other constants should vary as well, making the inner workings of nature more fickle than scientists ever suspected.”
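For context, α is a dimensionless combination of the elementary charge e, the vacuum permittivity ε₀, the reduced Planck constant ħ and the speed of light c, which is why a change in α would imply that other constants vary as well:

```latex
% Standard definition of the fine-structure constant (background, not from the book)
\alpha = \frac{e^{2}}{4 \pi \varepsilon_{0} \hbar c} \approx \frac{1}{137.036}
```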

The neuroscientist Benjamin Libet and his team in San Francisco measured changes in the brain and the timing of conscious experiences. First, Libet’s group stimulated their human subjects either with flashes of light or with a rapid sequence of mild electric pulses applied to the back of the hand. If the stimulus was short, less than about half a second (500 milliseconds), the subjects were unconscious of it, even though the sensory cortex of their brains responded. But if the stimulus went on for more than 500 milliseconds, the subjects became consciously aware of it. So far, so good. The need for a minimum duration of stimulus is not in itself surprising. What is surprising is that the subjects’ conscious awareness of the stimulus began not after 500 milliseconds but when the stimulus started. In other words, it took half a second for the stimulus to be experienced subjectively, but this subjective experience was referred backward to the moment the stimulus was first applied. Second, Libet investigated what happened when people made free conscious choices. The subjects sat quietly and were asked to flex one of their fingers or push a button whenever they felt like doing so, noting the moment at which they decided, or felt the wish, to act. This conscious decision occurred about 200 milliseconds before the finger movement. This seems straightforward: the choice preceded the action. What was remarkable was that electrical changes began in the brain about 300 milliseconds before any conscious decision was made, in other words about half a second before the movement itself. For some neuroscientists and philosophers, Libet’s findings seemed like the ultimate experimental proof that free will is an illusion. The brain changed first, and conscious awareness followed about a third of a second later; the decision appeared to follow the brain activity rather than initiate it.

The materialist denial of purposes in evolution is not based on evidence but is an assumption. Materialists are forced to attribute evolutionary creativity to chance on ideological grounds.

The usual assumption is that genes somehow control or “program” the development of form, as if the nucleus, containing the genes, were a kind of brain controlling the cell. But Acetabularia shows that morphogenesis can take place without genes. If the rhizoid containing the nucleus is cut off, the alga can stay alive for months, and if the cap is cut off, it can regenerate a new one. Even more remarkably, if a piece is cut out of the stem, then after the cuts have healed a new tip grows from the end where the cap used to be and makes a new cap.

Mechanists expel purposive vital factors from living animals and plants, but then they reinvent them in molecular guises. One form of molecular vitalism is to treat the genes as purposive entities with goals and powers that go far beyond those of a mere chemical like DNA. The genes become molecular entelechies. In his book The Selfish Gene, Richard Dawkins endowed them with life and intelligence. In Dawkins’s words, “DNA moves in mysterious ways.” The DNA molecules are not only intelligent, they are also selfish, ruthless and competitive. If challenged, most biologists will admit that genes merely specify the sequence of amino acids in proteins, or are involved in the control of protein synthesis. They are not really programs; they are not selfish, they do not mold matter, or shape form, or aspire to immortality.

There is a vast gulf between rhetoric about the powers of genes and what they actually do. Investors in biotechnology are swept along by the metaphors, as are readers of popular science. The problem goes right back to Weismann, who made the determinants an active agency, controlling and directing the organism’s development. In effect, he endowed a special kind of matter, the germ-plasm, with the properties of the soul. Genetic programs and selfish genes are similarly endowed with vital powers, including the ability to “mould matter” and “create form.” Thanks to the discoveries of molecular biology, we know what genes actually do. They code for the sequences of amino acids that are strung together in polypeptide chains, which then fold up into protein molecules. Also, some genes are involved in the control of protein synthesis. DNA molecules are molecules. They are not “determinants” of particular structures. Richard Dawkins has probably done more than any other author to popularize genes. Unfortunately, his vivid metaphors are highly misleading.

If genetic programs were carried in the genes, then all the cells of the body would be programmed identically, because in general they contain exactly the same genes. The cells of your arms and legs, for example, are genetically identical. Your limbs contain exactly the same kinds of protein molecules, as well as chemically identical bone, cartilage and nerves. Yet arms and legs have different shapes. Clearly, the genes alone cannot explain these differences. They must depend on formative influences that act differently in different organs and tissues as they develop. These influences cannot be inside the genes: they extend over entire tissues and organs. At this stage, in most conventional explanations, the concept of the genetic program fades out.

High hopes for the Human Genome Project were transformed into disappointment by the project’s actual results. One manifestation of those hopes was the press conference at the White House on June 26, 2000, at which President Clinton said, “We are here today to celebrate the completion of the first survey of the entire human genome. Without a doubt this is the most important, most wondrous map ever produced by mankind. It will revolutionize the diagnosis, prevention and treatment of most, if not all, human diseases … Humankind is on the verge of gaining immense, new power to heal.” The optimism that life would be understood if molecular biologists knew the “programs” of an organism gave way to the realization that there is a huge gap between gene sequences and actual human beings. In practice, the predictive value of human genomes turned out to be small, in some cases less than that achieved with a measuring tape. Recent “genome-wide association studies” compared the genomes of 30,000 people and identified about fifty genes associated with tallness or shortness. To everyone’s surprise, taken together, these genes accounted for only about 5 percent of the variation in height. Since twin and family studies estimate that roughly 80 to 90 percent of that variation is heritable, about 75 to 85 percent of the heritability was left unaccounted for; most of the heritability was missing. Many other examples of missing heritability are now known, including the heritability of many diseases, making “personal genomics” of very questionable value. Since 2008, this phenomenon has been referred to in the scientific literature as the “missing heritability problem.”
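A minimal sketch of the arithmetic behind the missing-heritability gap, using the figures above; the 80 percent heritability value is one commonly cited twin-study estimate, chosen here only for illustration.

```python
# Illustrative arithmetic for the "missing heritability problem"
# (figures taken from the note above; the heritability estimate is approximate).
heritability_estimate = 0.80     # share of height variation attributed to inheritance (twin studies)
explained_by_known_genes = 0.05  # share of height variation explained by ~50 GWAS-identified genes

missing = heritability_estimate - explained_by_known_genes
print(f"Heritable but unexplained by known genes: about {missing:.0%} of the variation in height")
```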

There is a growing recognition that some acquired characteristics can indeed be inherited. This kind of inheritance is now called “epigenetic inheritance.”

I carried out many further filmed observations of the behavior of Jaytee, a dog that seemed to know when its owner was coming home, and did similar experiments with other dogs, notably a Rhodesian Ridgeback called Kane. Again and again, on film and under controlled conditions, these dogs anticipated their owners’ returns. The most remarkable animal I have come across is an African gray parrot, N’kisi. N’kisi’s owner and I set up a controlled experiment in which she viewed a series of photographs in a random sequence while she was in a different room on a separate floor, being filmed continuously. The pictures were in a random order and represented twenty words in N’kisi’s vocabulary, such as “flower,” “hug” and “phone.” Meanwhile, N’kisi, who was alone, was also filmed continuously. He often said words that corresponded to the image she was viewing, and did so much more frequently than would be expected by chance. The results were highly significant statistically.

In an attempt to find out more about telepathy in modern societies, I launched a series of appeals for information through the media in Europe, North America and Australia. Over a period of fifteen years, I built up a database of human experiences, similar to my database on unexplained powers of animals, with more than four thousand case histories, classified into more than sixty categories.

I designed a simple procedure to test both the chance-coincidence theory and the unconscious-knowledge theory experimentally. I recruited subjects who said they quite frequently knew who was calling before answering the phone. I asked them for the names and telephone numbers of four people they knew well, friends or family members. The subjects were then filmed continuously throughout the period of the experiment, alone in a room with a landline telephone and no caller ID system. If there was a computer in the room, it was switched off, and the subjects had no mobile phone. My research assistant or I selected one of the four callers at random by the throw of a die. We rang up the selected person and asked that person to phone the subject within the next couple of minutes. The caller did so. The subject’s phone rang, and before answering it the subject had to say to the camera who, out of the four possible callers, she felt was on the line. She could not have known through knowledge of the callers’ habits and daily routines because, in this experiment, the callers rang at times randomly selected by the experimenter. By guessing at random, subjects would have been right about one time in four, or 25 percent. In fact, the average hit rate was 45 percent, very significantly above the chance level. This above-chance effect has been replicated independently in telephone telepathy tests at the universities of Freiburg, in Germany, and Amsterdam, in Holland.
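To make “very significantly above the chance level” concrete, here is a minimal sketch of the kind of significance check involved, assuming a hypothetical run of 100 trials with a 45 percent hit rate; the book reports only the percentages, so the trial count and the use of SciPy’s binomial test are illustrative assumptions.

```python
# Minimal sketch: is a 45% hit rate significantly above the 25% chance level?
# n_trials is hypothetical; the note above gives only the percentages.
from scipy.stats import binomtest

n_trials = 100        # hypothetical number of filmed calls
n_hits = 45           # 45% average hit rate reported in the book
chance_level = 0.25   # one correct guess in four callers by chance

result = binomtest(n_hits, n_trials, chance_level, alternative="greater")
print(f"Hit rate: {n_hits / n_trials:.0%}, one-sided p-value vs. chance: {result.pvalue:.2g}")
```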

One of the very few systematic observations of animal behavior before, during and after an earthquake concerns toads in Italy. A British biologist, Rachel Grant, was carrying out a study of mating behavior in toads for her PhD project at San Ruffino Lake in central Italy in the spring of 2009. To her surprise, soon after the beginning of the mating season in late March, the number of male toads in the breeding group suddenly fell, from more than ninety active males on March 30 to almost none on March 31 and in early April. As Grant and her colleague Tim Halliday observed, “This is highly unusual behavior for toads; once toads have appeared to breed, they usually remain active in large numbers at the breeding site until spawning has finished.” On April 6, Italy was shaken by a 6.4-magnitude earthquake, followed by a series of aftershocks. The toads did not resume their normal breeding behavior for another ten days, two days after the last aftershock. Grant and Halliday looked in detail at the weather records for this period but found nothing unusual. They were forced to the conclusion that the toads were somehow detecting the impending earthquake some six days in advance.

Skeptical organizations are the principal defenders of the belief that psychic phenomena are illusory: they seek to debunk or deny any evidence that suggests they might be wrong. The best established of these groups is the US Committee for Skeptical Inquiry (CSI). The effects of these well-organized and well-funded skeptical campaigns are not simply intellectual, but political and economic. Through maintaining the taboo against the “paranormal,” they ensure that most universities avoid this controversial area altogether, despite great public interest in the subject.

The physiologist Hermann von Helmholtz, who played such an important part in the establishment of the principle of conservation of energy in living organisms, dismissed the possibility of telepathy out of hand: “Neither the testimony of all the fellows of the Royal Society, nor even the evidence of my own senses, would lead me to believe in the transmission of thought from one person to another independently of the recognized channels of sense. It is clearly impossible.”

Discovering and testing new drugs is a lengthy and increasingly expensive process, and drug companies try to make as much money as possible from their drugs while patents last. They inevitably devote enormous amounts of money to advertising and promotion. In order to bolster the drugs’ scientific credibility, they offer large fees to scientists to put their names to articles that have been ghostwritten by authors paid by the drug company, or else the scientists are given other inducements to lend their names to studies they have not done. Medical ghostwriting takes many forms, but a recent case gives some insight into what is involved. In 2009, around fourteen thousand women who developed breast cancer while taking Prempro, a hormone replacement therapy (HRT), sued the drug’s manufacturer, Wyeth. In court, it turned out that many of the medical research papers supporting HRT had been ghostwritten by a commercial medical communications company called DesignWrite.

…In several other clinical trials, Prozac was no better than the placebo… However, the drug company, Eli Lilly, did not publish the results of unsuccessful trials, which were revealed only because an independently minded researcher, Irving Kirsch, managed to obtain the data using the US Freedom of Information Act. He found that when all the data were taken into account, not just the positive results published by the manufacturers, Prozac and several other antidepressants turned out to be no more effective than placebos, or than a herbal remedy, St. John’s wort, which is far cheaper.

When I was studying at Cambridge, one of our physiology lecturers, Fergus Campbell, gave a demonstration of the powers of hypnosis using one of my fellow students as a subject. Campbell told the subject that he was carrying out a scientific experiment on the response of skin to heat, and would be touching the subject’s arm with a lighted cigarette. In fact he touched it with the flat end of a pencil. Soon afterward, the skin reddened and a blister appeared where the cool pencil had touched. I later learned that many other hypnotists had shown the same thing, and that it had been studied, but not explained, by medical researchers. Nerves that control small arteries in the skin mediate this burn response. People cannot will themselves to activate these nerves, which are under the control of the autonomic or involuntary nervous system. Yet the hypnotic induction of burns shows that suggestion can work through the autonomic nervous system.

Hypnosis can also produce “miracle cures,” as it did in the case of a boy in London in the 1950s. He was born with a thick, dark skin, and as he grew, most of his body was covered with a black, rough casing. Doctors said he had been born with ichthyosis, “fish-scale disease.” Treatment at some of London’s best hospitals did no good. Even a skin transplant from his normal chest to his hands proved worse than useless: the skin blackened and then shrank, stiffening his fingers. Albert Mason, a young doctor interested in hypnosis, heard of the case and, watched by a dozen skeptical colleagues, put the boy into a hypnotic trance. He told him, “Your left arm will clear.” It did. About five days later the coarse outer layer of skin became crumbly and fell off. The skin underneath soon became pink and soft. Through repeated hypnosis, Mason cleared other parts of the body, limb by limb. In a follow-up study three years later, Mason and a team of dermatologists confirmed that “not only has there been no relapse, but his skin has continued to improve.”

Numerous studies in the United States and elsewhere have shown that people who are religious, especially those who regularly attend religious services, live significantly longer, and have better health and less depression, than people without religious faith. These effects were found in both Christian and non-Christian groups. For example, in a study in North Carolina, Harold Koenig and his colleagues tracked 1,793 subjects who were over sixty-five years old and had no physical impairments at the beginning of the study. Six years later, those who prayed were 66 percent more likely to have survived than those who did not pray, after correcting for age differences between the two groups.

In a recent systematic study, some Dutch psychologists at the University of Amsterdam contacted the authors of 141 papers published in leading psychology journals, asking for access to the raw data for the sake of reanalysis. All these journals required authors to sign an undertaking that they “would not withhold the data on which their conclusions are based from other competent professionals.” After six months and four hundred emails, the Amsterdam researchers received sets of data from only 29 percent of the authors.

One of the biggest cases of fraud to be exposed in physics in the twenty-first century concerned Jan Hendrik Schön, a young researcher on nanotechnology at Bell Laboratories, in New Jersey. He seemed brilliantly successful and amazingly productive, making breakthrough after breakthrough and receiving three prestigious awards. But in 2002, several physicists noticed that the same data appeared in different papers, apparently from different experiments. An investigating committee found sixteen instances of scientific misconduct, mostly the making up or recycling of data. Significantly, none of these instances of fraud was detected in the peer-review process. In another recent case, Marc Hauser, a Harvard professor of biology, was found guilty of scientific misconduct by an official inquiry at Harvard in 2010. He had falsified or invented data in experiments on monkeys. Again, his dishonesty was not detected by peer reviewers, but came to light when a graduate student blew the whistle. Hauser is the author of a book called Moral Minds: The Nature of Right and Wrong (2007), in which he claims that morality is an inherited instinct, produced by evolution and independent of religion. Hauser is an atheist, and claims his findings support an atheist point of view.