Unfortunately our temperature visualization from the last post is currently not running anymore. The probable reason: WebGL Earth appears to have moved two library files. In particular, the WebGL Earth API base script, which we had thought was self-contained, turns out not to be. We will look into this problem in the near future.
As announced in the last post, Tim and I have been working on a visualization of the data collection CRUTEM 4 by the Climatic Research Unit (CRU) at the University of East Anglia. In that post it was mentioned that the data in the collection is, in a sense, “deteriorating”: on the one hand, the number of active temperature measurement stations listed in the file (some stations started measuring as early as the 18th century) decreased rather rapidly in the last ten years; on the other hand, the file contains increasingly invalid or missing temperature data for those years.
In that context it is worthwhile to note that CRUTEM 4 supersedes CRUTEM 3, and that the CRUTEM 3 grid data was, according to the Intergovernmental Panel on Climate Change (IPCC), used for the IPCC Fourth Assessment Report (AR4).
Whether the “deterioration of that CRUTEM 4 data” has any effect on the assessment of current global warming trends is another question. The application is now online. Explore yourself! Caution: the data takes very long to load. The CRUTEM 4 data file is about 45 MB.
The following two interactive applications also display global temperature data:
- HADCRUT 3 (which uses CRUTEM 3) data is visualized here by Clive Best.
- NOAA’s Global Historical Climatology Network-Monthly (GHCN-M) is visualized here by Nick Stokes.
- Our comparison of temperature anomalies, CO2 and methane values uses HADCRUT 4, which uses CRUTEM 4 and HadSST3 (sea surface temperatures).
Unfortunately the application is currently not running anymore. The probable reason: WebGL Earth appears to have moved two library files. In particular, the WebGL Earth API base script, which we had thought was self-contained, turns out not to be. We will look into this problem in the near future.
Tim and I are currently working on an interactive browser visualization using temperature data from HADCRUT, namely the CRUTEM 4 temperature station data, which we map with the help of the open-source WebGL Earth API (which seems to be to quite some extent the work of the Czech-Swiss company Klokan Technologies) onto a model of the earth (covered with OpenStreetMap tiles).
The visualization is still work in progress, but what is already visible is that the temperature data is deteriorating quite noticeably (please see also the previous randform post on the topic of deterioration of data). It looks as if the deterioration was bigger in the years 2000-2009 than in the years 1980-2000. Below you can see screenshots of various regions of the world for the month of January in the years 1980, 2000 and 2009. The color of a rectangle indicates the (monthly) temperature value for the respective station (the station is represented by a rectangle around its coordinates), encoded with the usual hue encoding (blue is cold, red is hot). Black rectangles are invalid data. The CRUTEM 4 data file contains the data of 4634 stations. Mapping all the station data makes the visualization very slow, especially for scaling; hence the slightly different scalings/views for each region and the fact that screenshots are on display. The interactive application will probably not show all stations at once.
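As a minimal sketch of the hue encoding just described (blue is cold, red is hot, black for invalid data): the clamping range of -30 to 40 °C and the function name are my assumptions for illustration, not taken from the actual application.

```python
import colorsys

def temp_to_rgb(temp_c, t_min=-30.0, t_max=40.0):
    """Map a monthly temperature to an RGB color on the usual hue scale:
    blue (cold) at t_min through red (hot) at t_max.
    None stands for invalid data and is drawn black."""
    if temp_c is None:
        return (0, 0, 0)  # black rectangle: invalid data
    # Normalize to [0, 1], clamping out-of-range values.
    t = min(max((temp_c - t_min) / (t_max - t_min), 0.0), 1.0)
    # Hue runs from 240 deg (blue) down to 0 deg (red); colorsys works in [0, 1].
    hue = (1.0 - t) * 240.0 / 360.0
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return (int(r * 255), int(g * 255), int(b * 255))
```

For example, `temp_to_rgb(-30.0)` gives pure blue `(0, 0, 255)` and `temp_to_rgb(40.0)` pure red `(255, 0, 0)`.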
I am still collecting data on global employment in order to better investigate the replacement of human work by machines. Unfortunately it turned out that the International Labour Organisation (ILO), which holds most of the original data, restructured its IT sector. This means in particular that some older data can’t be reproduced any more. Above you can see that worldwide employment went down on average since the nineties. I now keep a local copy of the ILO data on our account in order to keep the findings reproducible. The data source as well as the source code for extracting it (GPL) are here. As always: if you spot some mistakes please let me know.
This concerns a discussion on Azimuth. I found that the temperature anomaly curve, which describes the global combined land [CRUTEM4] and marine [sea surface temperature (SST)] temperature anomalies (an anomaly is a deviation from a mean temperature) over time (HADCRUT4-GL), has a two-year periodicity (for more details click here). The dots in the above image are intended to show why I think so. The dark line drawn over the jagged anomaly curve is the mean curve. The grey strips are one year in width. A dot highlights a peak (or at least an upward bump) in the mean curve. More precisely there are:
- 18 red dots, which mark peaks within a grey 2-year interval
- 5 yellow dots, which mark peaks outside a grey 2-year interval (two of the yellow peaks are rather close together)
- 1 uncolored dot, which marks no real peak, but just a bump
- 4 blue dots, which mark small peaks within ditches
One sees that the red and yellow dots cover more or less all peaks in the curve (the blue dots account for the minor peaks, and there is just one bump which is not a full peak). The fact that the majority of the red and yellow dots are red means that there is a peak every 2 years, with a certain imprecision indicated by the width of the interval.
Upon writing this post I saw that I forgot one red dot. Can you spot where?
Especially after doing this visualization, this periodicity appears so visible to me that I think it should be a widely known phenomenon; however, at Azimuth nobody has heard about it yet. If it’s not a bug, then I could imagine that it could at least partially be due to differences in the solar irradiance for the northern and southern hemispheres, but this is so far just a wild guess and would need further investigation, which would cost me a lot of (unpaid) time and brain. So if you know what this phenomenon is called, then please drop a line. If it’s not a bug, then this phenomenon appears to me as an important fact which may, amongst others, enter the explanation of El Niño.
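The peak bookkeeping described above can be sketched as follows; the simple local-maximum criterion and the half-year tolerance are my assumptions for illustration, not the exact procedure behind the image:

```python
def find_peaks(values):
    """Indices of simple local maxima in a 1-D series
    (e.g. the smoothed mean anomaly curve, one value per sample)."""
    return [i for i in range(1, len(values) - 1)
            if values[i - 1] < values[i] > values[i + 1]]

def fraction_on_grid(peak_times, period=2.0, tol=0.5):
    """Fraction of consecutive peak-to-peak gaps that lie within tol
    of the assumed period (here: 2 years), i.e. the analogue of the
    red-dot share among the red and yellow dots."""
    gaps = [b - a for a, b in zip(peak_times, peak_times[1:])]
    if not gaps:
        return 0.0
    return sum(1 for g in gaps if abs(g - period) <= tol) / len(gaps)
```

On the counts above, 18 red dots among 23 red-plus-yellow dots would correspond to a fraction of roughly 0.78.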
This is just a very brief follow-up to my last post, in which I was looking at the market sizes of virtual assets.
Techdirt has a blog post describing that the NSA apparently uses gamification to make the use of the XKeyscore system more appealing.
I guess that although here a game is used as an introduction to a virtual application, this type of game wouldn’t fall into the free-to-play category, as described by superdataresearch:
One important trend in this context is the emergence of free-to-play or virtual goods revenue model. It allows the next generation of gamers to try a game before they commit any money, offering them a smooth introduction to games rather than asking for $50-$60 at the door.
I am currently trying to gather some data on the size of the games/virtual goods market and in particular the size of the corresponding work force. According to the company superdataresearch, the virtual goods market is now at about $15 billion.
To get a feeling for the size of this market I was looking for other market sizes for comparison. For instance, the global market for the production of illicit drugs in 2003 was somewhat similar in size, namely around $13 billion (page 16 in the World Drug Report):
“The value of the global illicit drug market for the year 2003 was estimated at US$13 bn at the production level, $94 bn at the wholesale level (taking seizures into account), and US$322 bn at the retail level (based on retail prices and taking seizures and other losses into account).”
I couldn’t find much on the workforce in this market, though.
Regarding again the games/virtual goods market superdataresearch writes:
APAC is the biggest region with $8.7 billion in total virtual goods sales, with China’s $5.1 billion market leading the pack.
For comparison, the US accounts for a share of about $3 billion according to TechCrunch/Inside Virtual Goods:
“The overall market for virtual goods in the US is headed towards $2.9 billion for 2012, according to the Inside Virtual Goods report. That’s up from $2.2 billion this year, and $1.6 billion in 2010.”
As a comparison, I found that the US meat market seems to have a size of about $7 billion.
In the meat market the work force comprises around 44000 people. So if one makes the ad-hoc assumption that the games and the meat markets are approximately equally labour-intensive (which is actually an interesting question), then about 20000 people in the US would make their living in the US games market. Likewise, worldwide this would give roughly 100000 people.
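The back-of-the-envelope scaling above can be written out explicitly; the proportionality is exactly the ad-hoc equal-labour-intensity assumption made in the text:

```python
def scaled_workforce(ref_market_bn, ref_workers, target_market_bn):
    """Scale a reference workforce proportionally to market size,
    under the ad-hoc assumption of equal labour intensity."""
    return ref_workers * target_market_bn / ref_market_bn

# Reference: US meat market, ~$7 bn and ~44000 workers (figures from the text).
us_games = scaled_workforce(7.0, 44000, 3.0)      # US games market, ~$3 bn
world_games = scaled_workforce(7.0, 44000, 15.0)  # worldwide, ~$15 bn
# ~18900 and ~94300, i.e. the "about 20000" and "roughly 100000" above
```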
Any more precise data in this direction is welcome.
There was recently a post on Gamasutra with the title: Titanfall: Why Respawn is punishing cheaters. The computer game Titanfall is a first-person shooter that can be played by a number of people in one environment. Wikipedia describes it as follows:
Players fight either on foot as free-running “Pilots” or inside agile mech-style walkers called “Titans” to complete team-based objectives on a derelict and war-torn planet as either the Interstellar Manufacturing Corporation (IMC) or the Militia.
I don’t know Titanfall (in general I have played first-person shooters rather rarely), but what apparently happened was that there were too many people cheating in the game.
In the post it isn’t really described what exactly is implied by cheating, but from what I infer from the “punishment” announcement, I think what was happening was that some people used game bots and in particular so-called aimbots, which are software tools that make shooting easier in such a game. From the Titanfall announcement:
You can play with other banned players in something that will resemble the Wimbledon of aimbot contests. Hopefully the aimbot cheat you paid for really is the best, or these all-cheater matches could be frustrating for you. Good luck.
I was asking myself, though, whether this action is part of some viral marketing campaign. That is, some cheaters could think that it would be way cooler to “win the Wimbledon of aimbot contests” than the usual game. Given, however, that Titanfall had performance problems which, as it seems, were due to overloaded game servers and connections, it doesn’t look as though this would improve with aimbot contests.
In this context:
From a citation of a report by a tech- and investment-advisory firm in the Time article The Surprisingly Large Energy Footprint of the Digital Economy:
In his report, Mills estimates that the ICT system now uses 1,500 terawatt-hours of power per year. That’s about 10% of the world’s total electricity generation
The New York Times article Power, Pollution and the Internet remarks the following about, e.g., US data centers:
Nationwide, data centers used about 76 billion kilowatt-hours in 2010, or roughly 2 percent of all electricity used in the country that year, based on an analysis by Jonathan G. Koomey, a research fellow at Stanford University who has been studying data center energy use for more than a decade. DatacenterDynamics, a London-based firm, derived similar figures.
A summary of the last IPCC report about climate change and global warming.
In Berlin, the International Games Week Berlin is currently taking place.