Does Antimatter Fall UP?

Does matter repel antimatter? Or does gravity always suck? Physicists would like to know! A recent article asks exactly this in its title: "Can the new Neutrino Telescopes reveal the gravitational properties of antimatter?"

The abstract reads:
We argue that the Ice Cube, a neutrino telescope under construction at the South Pole, may test the hypothesis of the gravitational repulsion between matter and antimatter. If there is such a gravitational repulsion, the gravitational field, deep inside the horizon of a black hole, might create neutrino-antineutrino pairs from the quantum vacuum. While neutrinos must stay confined inside the horizon, the antineutrinos should be violently ejected. Hence, a black hole (made from matter) should behave as a point-like source of antineutrinos. Our simplified calculations suggest, that the antineutrinos emitted by supermassive black holes in the centre of the Milky Way and Andromeda Galaxy, could be detected at the Ice Cube.

I had never thought about this question. But it may be that in the far, far future (after all the stars die), our descendants (probably machines?) will orbit black holes to capture the energy of the ejected antineutrinos.

(By the way, where would the energy come from, the black hole or the "quantum vacuum" itself?)

The Borders of Privacy

In my previous post, I defined privacy piracy. My focus was on for-profit companies pirating personal information. Here I want to mention that governments are increasingly restricting our right to electronic privacy as well.

As an example, U.S. border officials can seize and search laptops, smartphones, and other electronic devices for any reason. The ACLU is suing, with the stated goal that "...the government has to have some shred of evidence they can point to that may turn up some evidence of wrongdoing" before such searches can be performed. The American Civil Liberties Union cites government figures and estimates that 6,500 people have had their electronic devices searched at the U.S. border since October 2008. No mention of how many terrorists were caught.

So what can the average computer geek do to protect his privacy?

One solution is encryption. TrueCrypt is free, open-source software that can encrypt files, partitions, and even a laptop's entire operating system. There are versions for both Windows and Linux, although complete OS encryption is available for Windows only. See the complete documentation here. I have used the software to encrypt Windows operating systems, partitions on Linux systems, and numerous files on both operating systems. TrueCrypt performed without error and did not seem to affect performance adversely. (I noticed my CPU utilization went up a bit, but the CPU had no difficulty keeping up with the hard disk; a CPU can encrypt/decrypt data faster than a hard drive can read/write it.) Just be sure to follow their password recommendations. IMHO, the algorithms used by TrueCrypt should be quite robust against even the most sophisticated decryption efforts that nefarious governments can mount.
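TrueCrypt's ciphers (AES, Serpent, Twofish) are far more sophisticated, but the basic idea of symmetric encryption -- derive a keystream from a password, then XOR it into the data -- can be sketched in a few lines of Python. This is a toy illustration only, not TrueCrypt's actual algorithm and not secure enough for real secrets:

```python
import hashlib

def keystream(password: str, salt: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream by chained SHA-256 hashing (toy only)."""
    out = bytearray()
    block = hashlib.sha256(salt + password.encode()).digest()
    while len(out) < length:
        out.extend(block)
        block = hashlib.sha256(block).digest()
    return bytes(out[:length])

def toy_encrypt(data: bytes, password: str, salt: bytes = b"demo-salt") -> bytes:
    """XOR the data with the keystream; applying it twice decrypts."""
    ks = keystream(password, salt, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

secret = b"brokerage account: 12345"
ciphertext = toy_encrypt(secret, "correct horse battery staple")
# XOR is its own inverse, so the same call with the same password decrypts.
assert toy_encrypt(ciphertext, "correct horse battery staple") == secret
```

The point to take away is the symmetry: without the password the ciphertext is just noise, and with it the original bytes come back exactly.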

It is easy to observe that a file has been encrypted -- the entire file consists of what looks like completely random data. Recent U.S. case law suggests that government agents who notice an encrypted file during a laptop search may be able to compel you to divulge its password (Fifth Amendment protections notwithstanding). They could then use the password to decrypt and read the file.
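That "random looking" quality is actually measurable: well-encrypted bytes show close to the maximum 8 bits of Shannon entropy per byte, while ordinary text sits far lower. A quick sketch (using os.urandom as a stand-in for ciphertext):

```python
import math
import os
from collections import Counter

def entropy_bits_per_byte(data: bytes) -> float:
    """Shannon entropy of the byte-value distribution, in bits per byte (max 8.0)."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

english = b"the quick brown fox jumps over the lazy dog " * 200
random_like = os.urandom(8800)  # stand-in for well-encrypted data

print(f"plain text: {entropy_bits_per_byte(english):.2f} bits/byte")   # roughly 4
print(f"ciphertext: {entropy_bits_per_byte(random_like):.2f} bits/byte")  # close to 8
```

A border agent (or a forensics tool) running a test like this over a disk can flag encrypted regions in seconds, which is exactly why the plausible-deniability features below matter.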

To deal with this privacy commandeering, TrueCrypt has a couple of plausible deniability tricks. One trick is to hide an encrypted volume within an encrypted volume, each having separate passwords. The inner volume is undetectable. Which volume is accessed depends on which password is used. This trick allows a person to reveal the password of the outer encrypted file but "forget" to mention the inner encrypted volume. Another trick is the ability to hide an entire operating system (Windows only) behind a decoy encrypted operating system.

However, like most people, although I like to rant against nefarious governments, my real concern is having my laptop stolen. A web search revealed inconsistent statistics, but I would guess anywhere from 100,000 to 500,000 laptops are stolen each year. So my bigger worry is that some thief will get his hands on my private and financial data. This includes not only bank statements and brokerage account information, but related data in my operating environment, such as cookies and the contents of my swap file.

Again, what to do?

I create a virtual machine that I use exclusively for my online financial transactions and private communications, and I store that virtual machine on a TrueCrypt volume on my laptop. If my laptop is ever stolen, the thieves will be able to find out all about my web-surfing habits, but nothing truly sensitive or potentially damaging, because all of that stays on the encrypted virtual machine.

BTW, I haven't overlooked that smartphones contain a lot of private data too. I'll address smartphone encryption in a later post.

Privacy Piracy

Define Internet privacy piracy as the unauthorized collection, analysis, and distribution of personal information by third parties for profit. My questions are: Are the pirates taking over the Internet? Are they changing the architecture of the Internet to favor themselves at our expense? Are they making government espionage easier?

An article in the WSJ claims that ("don't be evil") Google has begun to aggressively cash in on its vast trove of data about Internet users. Google had feared a public backlash. "But the rapid emergence of scrappy rivals who track people's online activities and sell that data, along with Facebook Inc.'s growth, is forcing a shift." Also, according to Eric Schmidt: "If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place." Not exactly comforting words. (Note: See this article for a defense of Google from Wired Magazine.)

Also, from an LATimes Technology Blog post:
Apple Inc. is now collecting the "precise," "real-time geographic location" of its users' iPhones, iPads and computers.
In an updated version of its privacy policy, the company added a paragraph noting that once users agree, Apple and unspecified "partners and licensees" may collect and store user location data.
When users attempt to download apps or media from the iTunes store, they are prompted to agree to the new terms and conditions. Until they agree, they cannot download anything through the store.

The large Internet firms could introduce technologies that would make it EASY for users to protect their privacy. But will these firms do so? Microsoft, for example, had the chance to redesign Internet Explorer to make it more privacy friendly, but evidently chose not to.

In future posts, I would like to discuss approaches the average software geek can take to help protect online privacy.

A Note on the Quality of the Climate Model Software

In response to a comment to my last post, I wrote that:
IMHO, the predictive skill of the climate models has not been formally and empirically demonstrated (as in IV&V).

This is the same position I had back in March, when I posted a note on the current state of the climate model software.

Jon Pipitone has performed a study of the quality of software in climate modeling. I mention Pipitone's work because it was brought to my attention that Steve Easterbrook links to it in a statement he made in a blog post yesterday:
Our research shows that earth system models, the workhorses of climate science, appear to have very few bugs...
Does not a statement like this AMHO (affect my humble opinion)? Unless I take it out of context -- no. What is the context?

In a blog post describing Jon Pipitone's work, Easterbrook writes:
I think there are two key results of Jon’s work:

1. The initial results on defect density bear up. Although not quite as startlingly low as my back of the envelope calculation, Jon’s assessment of three major GCMs indicate they all fall in the range commonly regarded as good quality software by industry standards.

2. There are a whole bunch of reasons why result #1 may well be meaningless, because the metrics for measuring software quality don’t really apply well to large scale scientific simulation models. [Emphasis added.]
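For context, "defect density" is simply post-release defects per thousand lines of code (KLOC); industry folklore generally treats anything under roughly 1 defect/KLOC as good commercial quality. The calculation itself is trivial (the numbers below are hypothetical, not taken from Pipitone's data):

```python
def defect_density(defects: int, lines_of_code: int) -> float:
    """Post-release defects per thousand lines of code (KLOC)."""
    return defects / (lines_of_code / 1000)

# Hypothetical GCM: 400,000 lines of Fortran with 25 reported defects.
print(defect_density(25, 400_000))  # 0.0625 defects/KLOC
```

The simplicity of the metric is exactly Easterbrook's second point: the arithmetic is easy, but whether a low number means anything for a large scientific simulation is the open question.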

And in the Conclusion of his thesis Pipitone writes:
The results of our defect density analysis of three leading climate models shows that they each have a very low defect density, across several releases. A low defect density suggests that the models are of high software quality, but we have only looked at one of many possible quality metrics. As we have discussed, knowing which metrics are relevant to climate modelling software quality, and understanding precisely how they correspond to the climate modellers' notions of software quality (as well as our own) is the next challenge to take on in order to achieve a more thorough assessment of climate model software quality. [Emphasis added.]

We found a variety of code faults from our static analysis. The majority of faults common to each of the models are due to unused code or implicit type manipulation. From what we know of the construction of the models, there is good reason to believe that many of these faults are the result of acknowledged design choices -- most significantly are those that allow for the flexible configuration of the models. Of course, without additional study, it is not known whether the faults we have uncovered point to weaknesses in the code that result in operational failures, or generally, what the impact is of these faults on model development and use. [Emphasis added.]

And in describing possible threats to the validity of his thesis, Jon writes:
We do not yet understand enough about the different types of climate modelling organisations to hope to make any principled sampling of climate models that would have any power to generalize. [Emphasis added.] Nevertheless, since we used convenience and snowball sampling to find modelling centres to participate in our study we are particularly open to several biases [10]. For example:

* Modelling centres willing to participate in a study on software quality may be more concerned with software quality themselves;

* Modelling centres which openly publish their climate model code and project artifacts may also be more concerned with software quality;

In addition, our selection of comparator projects was equally undisciplined. We simply chose projects that were open-source, and that were large enough and well-known enough to provide an intuitive, but by no means rigorous, check against the analysis of the climate models. We have also chosen to include a model component, from centre C1, amongst the GCMs from the other centres we analysed. Even though this particular model component is developed as an independent project it is not clear to what extent it is comparable to a full GCM.

Our choice to use defect density and static analysis as quality indicators was made largely because we had existing publications to compare our results with, not because we felt these techniques are necessarily good indicators. Furthermore, whilst gauging software quality is known to be tricky and subjective, most sources suggest that it can only accurately be done by considering a wide range of quality indicators [21, 3, 1, 17]. Thus, at best, our study can only hope to present a very limited view of software quality. [Emphasis added.]

Thus, "there are a bunch of reasons" why Easterbrook's statement "may well be meaningless".