Tuesday, March 15, 2011

Links about the situation in Japan

---
This page collects the links about the events in Japan. The links to the "original" websites go over CoralCDN - hence the ".nyud.net" appended to the domain names. Read more about CoralCDN at http://www.coralcdn.org/. Chances are that not all of the websites can bear the load they will get - so use the CoralCDN links so the sites do not die. I'll update this post as I find more links.
---

Countermeasures for 2011 Tohoku - Pacific Ocean Earthquake

Some baseline on the radiation:

Smoking: "Based on careful assessments of the concentrations of 210Po in the lung tissues, it was estimated that the "hot spots" received an annual dose of about 160 millisievert (about 16,000 millirem), two of the more common units for expressing doses from ionizing radiation." [Health Physics Society]. Divided by 8760 (24*365), this gives 18.26 microsievert/hour delta atop the background radiation levels. Another source with radiation in cigarettes. So if you are smoking you probably should take a note of this.

Here's another image, a scan from a book, that I found on a blog entry in Japanese about radiation impacts. 1 rad = 10 mSv = 10 mGy, the author writes:

[Image: radiation effects on normal tissues]

And one more reference about radiation effects. And a diagram comparing the various sources.

So, after setting up this baseline, you can go and look at the data.

Facts:

Geiger counter in Chiba
Geiger counter in Tokyo
Video of a Geiger counter in Tokyo
One more video of a Geiger counter in Tokyo
Google doc with data from the three counters above and the radiation readings from http://www.bousai.ne.jp/eng/index.html
Japan radiation open data (from the maintainer of the above Google doc)
Graphical dashboard based on these values
A Geiger counter on a transatlantic flight
Crowdsourced data on radiation


Saitama prefecture readings

List of 5.0+ earthquakes for the past 7 days


Articles:

Graphic showing the radiation levels at the power plants vs. the various reference points
http://www.simon-cozens.org/content/radiation-tokyo-how-read-geiger-counter - has a good explanation of how to read the counters and what the numbers relate to. This is where I got the first Geiger counter link above.
Articles by the MIT Department of Nuclear Science and Engineering about the Japanese nuclear reactors.
Some Perspective On The Japan Earthquake

TV/Video:

MIT technical briefing recording
http://www.ustream.tv/channel/yokosonews
http://www.ustream.tv/channel/nhk-world-tv
http://www.ustream.tv/channel/nhk-gtv
http://www.ustream.tv/channel/tbstv
http://www.youtube.com/tbsnewsi
http://www.earthcam.com/japan/tokyo/
Fukushima Daiichi Nuclear Power Station camera
One other Fukushima webcam

Twitter:
A person who was translating the TBS broadcasts
Reuters. Level-headed reporting without the hysteria.
Periodic reports on radiation levels

TEPCO press releases:

http://www.tepco.co.jp/en/press/corp-com/release/index-e.html

NASA:

The Japanese earthquake should have caused Earth to rotate a bit faster, shortening the length of the day by about 1.8 microseconds.

Japan Atomic Industrial Forum

http://www.jaif.or.jp/english/

INES levels:

http://www.iaea.org/Publications/Factsheets/English/ines.pdf

IAEA briefings:

Briefing videos
IAEA updates page

Networking-related:

http://www.jpnap.net/english/jpnap-tokyo-i/traffic.html
http://gigaom.com/broadband/in-japan-many-under-sea-cables-are-damaged/


Tuesday, March 8, 2011

Autoextraction of Abstracts from RFCs and drafts

An idée fixe of mine (uh, I mean *one more*) is to somehow organize a collection of IETF docs - RFCs/drafts that in some way touch IPv6 (thanks to Fred Baker for this nice puzzle).

So, what I have is 140 megabytes of data, sitting in just under 2000 files that represent the RFCs and various drafts.

The first step towards doing anything at all with this pile is to be able to chop it into chunks - to put the congruent parts side by side, move the ASCII pictures aside, and similar mundane tasks.

The first part of that is to extract the section that is there in almost every IETF doc - the "Abstract". In general, the section titles start at a 0-column indent, while the text of the paragraphs typically has 2+ columns of indentation. However, this is only a general rule. There are zillions of exceptions over the years: variations of spelling, wrong indents, MS-DOS carriage returns, all sorts of nasty mess. Anyway, the first try at this is done.
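
If you're curious, here is a minimal sketch of that heuristic in Python - the function name is mine, and the real code has to cope with all the exceptions above:

import re
import sys

def extract_abstract(text):
    # Heuristic: section titles sit at column 0, body text is
    # indented by 2+ columns. Form feeds, odd indents and the
    # rest of the nasty mess are ignored in this sketch.
    abstract, inside = [], False
    for line in text.splitlines():
        if re.match(r'Abstract\s*$', line):
            inside = True
            continue
        if inside:
            if line and not line[0].isspace():
                break          # a new 0-column title ends the section
            abstract.append(line.strip())
    return ' '.join(filter(None, abstract))

if __name__ == '__main__':
    print(extract_abstract(open(sys.argv[1]).read()))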

I extract the titles from the page-break-placed headers, and it shows - in some of them the month and the year are glued onto the right side. This should get fixed eventually, if I figure out some heuristic.

Here's the result, in case you find it useful at all:

Abstracts from some RFCs and drafts.

Sunday, March 6, 2011

Interesting bits from HTTP/1.1 RFC

(Originally under the title "Is your server HTTP/1.1 compliant?" - but I realised that it's not really relevant)

Today, after watching this excellent Wireshark kung-fu video with Hansang Bae, I decided to comb through the HTTP/1.1 spec and see what other interesting bits I could fish out of there - ones that are less frequently played with or are otherwise noteworthy.
Here they go, for your entertainment.

To allow for transition to absoluteURIs in all requests in future
versions of HTTP, all HTTP/1.1 servers MUST accept the absoluteURI
form in requests, even though HTTP/1.1 clients will only generate
them in requests to proxies.

In layman's terms: your compliant server must understand not only the classic "GET / HTTP/1.1", but also "GET http://www.yourhost.com/ HTTP/1.1" - in case the clients upgrade. All but one of the servers I ran a quick test against seem to have never seen this part of the spec. Or optimized it out.
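
If you want to poke at your own server, a quick-and-dirty probe (a sketch - substitute the host you actually want to test):

import socket

host = 'www.yourhost.com'
req = ('GET http://%s/ HTTP/1.1\r\n'
       'Host: %s\r\n'
       'Connection: close\r\n\r\n') % (host, host)

s = socket.create_connection((host, 80))
s.sendall(req.encode('ascii'))
# The status line is all we care about here
print(s.recv(1024).decode('ascii', 'replace').split('\r\n')[0])
s.close()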

An origin server that does not allow resources to differ by the
requested host MAY ignore the Host header field value when
determining the resource identified by an HTTP/1.1 request. (But see
section 19.6.1.1 for other requirements on Host support in HTTP/1.1.)

Immediately after that follows a big blurb on how the server is supposed to derive the host name from the absolute URI - i.e. the very form that almost no one seems to support. So the implementations deliberately ignore the spec. Or did not read it attentively?
The in-progress work from the HTTPbis working group in the IETF also specifies the absolute URIs. So, some housecleaning will be in order.

A very interesting bit about the pipelining:

Clients which assume persistent connections and pipeline immediately
after connection establishment SHOULD be prepared to retry their
connection if the first pipelined attempt fails. If a client does
such a retry, it MUST NOT pipeline before it knows the connection is
persistent. Clients MUST also be prepared to resend their requests if
the server closes the connection before sending all of the
corresponding responses.


This general "must be prepared to act robustly" statement makes me think of all sorts of interesting failure modes (yes, and indeed I've seen some of those in real life). However, this "retry" also brings a potential L7 hook for the Happy Eyeballs logic - in some form, maybe, later. Having an L7 hook would be a good thing: the application may have a better idea about failures than layers 3/4 do. Anyway, I digress.

Another interesting piece:


This means that clients, servers, and proxies MUST be able to recover
from asynchronous close events. Client software SHOULD reopen the
transport connection and retransmit the aborted sequence of requests
without user interaction so long as the request sequence is
idempotent (see section 9.1.2).


This is also a HUGE RED FLAG for application developers: never EVER use "GET" for anything that is not idempotent. Theoretically the server should not see the two requests, but under some conditions it might (say, with a proxy in between?). All of this is still true for the specs being prepared in the HTTPbis WG now.
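
In client code this boils down to something like the sketch below (assuming a torn-down connection surfaces as a connect error or a truncated status line):

import http.client

IDEMPOTENT = {'GET', 'HEAD', 'PUT', 'DELETE', 'OPTIONS', 'TRACE'}

def request_with_retry(host, method, url, body=None, retries=1):
    # Resend after an asynchronous close - but only when the
    # method is idempotent; a POST must never be blindly replayed.
    for attempt in range(retries + 1):
        conn = http.client.HTTPConnection(host)
        try:
            conn.request(method, url, body)
            resp = conn.getresponse()
            data = resp.read()
            conn.close()
            return resp.status, data
        except (http.client.BadStatusLine, ConnectionError):
            conn.close()
            if method not in IDEMPOTENT or attempt == retries:
                raise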

Here's the well-known humorous piece:


Clients that use persistent connections SHOULD limit the number of
simultaneous connections that they maintain to a given server. A
single-user client SHOULD NOT maintain more than 2 connections with
any server or proxy.


Yeah, right. Web 2.0 apps do exactly that. Not.


The Max-Forwards request-header field MAY be used to target a
specific proxy in the request chain. When a proxy receives an OPTIONS
request on an absoluteURI for which request forwarding is permitted,
the proxy MUST check for a Max-Forwards field. If the Max-Forwards
field-value is zero ("0"), the proxy MUST NOT forward the message;
instead, the proxy SHOULD respond with its own communication options.


Is there already an HTTP-level "traceroute" to poke at the caches on the way?
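
One could sketch exactly that on top of this MUST (the host is hypothetical, and it only reveals proxies that actually honor Max-Forwards):

import http.client

def http_traceroute(host, max_hops=10):
    # Each proxy decrements Max-Forwards; the one that receives
    # zero must answer itself - so stepping the counter from 0
    # upward walks the proxy chain, traceroute-style.
    for hops in range(max_hops):
        conn = http.client.HTTPConnection(host)
        conn.request('OPTIONS', '*', headers={'Max-Forwards': str(hops)})
        resp = conn.getresponse()
        print(hops, resp.status,
              resp.getheader('Via') or resp.getheader('Server') or '?')
        conn.close()

http_traceroute('www.yourhost.com')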

Kind of obvious, but an interesting clarification nonetheless:


The fundamental difference between the POST and PUT requests is
reflected in the different meaning of the Request-URI. The URI in a
POST request identifies the resource that will handle the enclosed
entity. That resource might be a data-accepting process, a gateway to
some other protocol, or a separate entity that accepts annotations.
In contrast, the URI in a PUT request identifies the entity enclosed
with the request -- the user agent knows what URI is intended and the
server MUST NOT attempt to apply the request to some other resource.


The "correct" code for post-POST redirections should be 303, not the 302, but 302 was for "older clients":


Note: Many pre-HTTP/1.1 user agents do not understand the 303
status. When interoperability with such clients is a concern, the
302 status code may be used instead, since most user agents react
to a 302 response as described here for 303.


A fun fact from testing this: both Firefox and Chromium send exactly 21 requests before giving up and declaring "it's a redirect loop" - even when the target URIs in the "Location:" headers of the replies are all different. Buyer, beware. It's more than the "old 5" that the spec warns about - but there's no ultra-clever heuristic for catching the redirect loop, either.
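
Reproducing the test is trivial - a toy redirect-loop server along these lines (port and paths are arbitrary), then point a browser at it and count the requests:

import itertools
from http.server import BaseHTTPRequestHandler, HTTPServer

hop = itertools.count()

class RedirectLoop(BaseHTTPRequestHandler):
    # Every reply is a 302 to a fresh URI, so each redirect looks
    # "different" to the client - yet browsers still bail out.
    def do_GET(self):
        self.send_response(302)
        self.send_header('Location', '/hop-%d' % next(hop))
        self.end_headers()

HTTPServer(('127.0.0.1', 8000), RedirectLoop).serve_forever()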

This stops at section 12; maybe tomorrow I'll go through the rest and see if I can gather some other interesting pieces.

Saturday, March 5, 2011

Hardware hacking: Peltier TEG experiment - parts list.

During a discussion over a beer at Betagroup yesterday, the topic of human power for devices came up. Today I stumbled upon this one and could not help but grab a couple of the "energy harvesters". They supposedly can generate up to 2 V at a temperature delta of 75°C. A 75°C difference between the 36°C of my body (hot side) and a "cold side" of -39°C would make it a pretty useful device for a winter in Siberia.

So, let's try a different approach with an LTC3108EGN - the step-up converter.
The schematic suggests using the CQ200 from Honeywell, but with such a form factor I am not sure I'd be much interested in experimenting with it :-)

Also, let's get the coil that is needed. The capacitors I already have in my pile, so those should be no problem.

You wonder what this will be used for? Well, of course my beloved ATtiny45, which is supposed to have pretty low power consumption - the "V" models boast 300 microamps at 1.8 V when running at 1 MHz. This should be perfect, if I correctly estimated the amount of electricity the TEG will generate at a low delta.
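
A back-of-the-envelope sanity check (the linear scaling of the TEG voltage with delta-T is my assumption, not a datasheet figure):

delta_t = 5.0                      # a realistic skin-to-air delta, in C
v_teg = 2.0 / 75.0 * delta_t       # ~27 mV per degree -> ~133 mV open-circuit
p_mcu = 300e-6 * 1.8               # ATtiny45V: 300 uA * 1.8 V = 0.54 mW

print('%.0f mV from the TEG, %.2f mW needed by the MCU'
      % (v_teg * 1000, p_mcu * 1000))

The LTC3108 is specified to start from inputs as low as 20 mV, so the voltage side looks workable; whether the TEG can actually source half a milliwatt at such a small delta is exactly what the soldering will tell.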

All right, now to just wait till all the parts get here; then a bit of soldering, and we'll see if this idea works.