
Archive for the ‘Information Technology’ Category

How it works: Building the Helioviewer “back end” with JPEG2000

June 16th, 2011

Post 1 of 5: Explore the sun on your desktop with Helioviewer
Post 2 of 5: Getting Started with Helioviewer.org
Post 3 of 5: Explore the sun in depth with JHelioviewer
Post 4 of 5: How it works: building the Helioviewer “back end” with JPEG2000


New interactive visualization tools developed by the NASA/European Space Agency (ESA) Helioviewer Project allow scientists and the general public to explore images captured by solar observing spacecraft. Previous posts explained the origins and aims of the Helioviewer Project, and the basics of a Web-based app called Helioviewer.org. This post looks at the behind-the-scenes technology that makes Helioviewer possible.

The Solar Dynamics Observatory captures hi-def images of the sun and beams them to Earth at 150 megabits per second, roughly one image every few seconds.


The Helioviewer.org Web app and the JHelioviewer software are the on-screen interfaces that users see. But there is also a critical data-processing “back end” that required just as much effort to develop. The challenge was this: How do you acquire and manipulate solar images quickly enough so that the process is truly “real time,” without long waiting times for downloads and glacial refresh rates on the image view every time you make a change, like zooming in on a feature of interest?

This is particularly challenging when working with high-resolution images from NASA’s Solar Dynamics Observatory. SDO sends down images that are 4,000 by 4,000 pixels, roughly the number of pixels in a 13 by 13 inch photographic print at a standard 300 dots per inch.

Google Maps and Google Earth overcame this issue by “tiling” large images into a checkerboard of smaller segments that could be quickly assembled into an image at the scale a user requested.

A Google Maps for the sun
The prototype of Helioviewer took this approach, too, following Google’s lead. “Google Maps was the original inspiration for it,” Helioviewer Project co-founder Jack Ireland says.

In the prototype of Helioviewer.org, each stage of a zoom-in required a complete set of tiles. The system retrieved the tiles it needed to build the view requested by the user with every click of the mouse. The trouble is, as you zoom in, building the new image requires an ever-increasing number of small tiles, quickly numbering in the hundreds. Each tile is a separate file, and they all have to be labeled, stored, and pulled from storage and assembled when needed.
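Just to show how fast that multiplies, here is a little back-of-envelope Python sketch. (The 512-pixel tiles and the number of zoom levels are illustrative guesses, not the prototype's actual settings.)

```python
# Each zoom level doubles the rendered image size, so the number of
# tiles needed to cover it quadruples. Sizes here are illustrative.
image_size, tile_size = 4096, 512   # one SDO image, hypothetical tile size

total, size, level = 0, tile_size, 0
while size <= image_size:
    tiles = (size // tile_size) ** 2
    print(f"zoom {level}: {tiles} tiles")
    total += tiles
    size, level = size * 2, level + 1

print(f"{total} tiles to pre-generate, per image")
# -> 1, 4, 16, 64 tiles per level: 85 files for every single image
```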

Then Helioviewer met JPEG2000, a standard for compressing images to make them extremely small while maintaining very good image quality. Just as important, JPEG2000 can extract sub-regions of a compressed image file, at any of several built-in resolution levels, without having to decompress the whole file.

In other words, the system generates only the part of the image you really want to see. If you have ever downloaded or extracted a very large compressed image file, you understand the time saving that JPEG2000 offers.
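For the curious, here is a minimal sketch of what that partial decoding looks like in practice, using glymur, an open-source Python reader for JPEG2000. The filename is made up, and this illustrates the technique rather than Helioviewer's actual server code.

```python
import glymur  # open-source JPEG2000 bindings (pip install glymur)

# Opening the file parses its headers; nothing is decompressed yet.
jp2 = glymur.Jp2k("aia_171_2011_06_16.jp2")  # hypothetical filename

# Decode only a 512x512 window around a feature of interest...
closeup = jp2[2048:2560, 1024:1536]

# ...or the whole disk at quarter resolution, using the codestream's
# built-in resolution levels (the power-of-two step selects the level).
overview = jp2[::4, ::4]
```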

“One thing that changed early on that made a huge difference and made all this really possible is that we use this JPEG2000 technology,” Helioviewer Project co-founder Keith Hughitt explains. “Instead of generating all the possible tiles for every single image, we wait until the user asks for a tile and generate it right then, and only generate the ones we need. We were able to develop a way to do that quickly enough that you can do it right on the Web page.”

Data pipeline from Palo Alto
Lockheed Martin’s Solar and Astrophysics Laboratory in Palo Alto, California, which built the Atmospheric Imaging Assembly aboard SDO, uses JPEG2000 to compress every third new SDO image (i.e., one every few seconds) and then sends the images through a data pipeline to Goddard. An image can be available on Helioviewer’s server at Goddard in as little as 20 minutes.

The system needs to store this one compressed master file, not hundreds of tiles. That one image file — or a portion of it — can be quickly decompressed and displayed at the resolution needed.

For example, as you click the little “plus sign” icon on Helioviewer to zoom in on a flare on the surface of the sun, the back end of the system decompresses the same file multiple times at increasing resolution — like a telephoto lens capturing an image at ever higher magnification — and displays it on your computer screen.

This “on the fly” manipulation also applies to time-lapse videos made with JHelioviewer. “JHelioviewer tells the server which portion of the images it is interested in, and the video-stream is updated in real time so that only those bits are transmitted back to JHelioviewer,” Hughitt explains. “The result is a sort of ‘dynamic’ movie stream that you can create, and then adjust as you are playing it.”

This means that as the video plays, you can zoom, pan, sharpen, brighten, or follow a specific feature across the sun. If you choose to download the video, the server renders the final product at whatever settings you choose.
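Under the hood, this streaming uses JPIP, the interactive protocol defined in Part 9 of the JPEG2000 standard. Here is a deliberately simplified sketch of the idea, reusing the hypothetical glymur-based decoding from the earlier example; the real JHelioviewer client is far more sophisticated.

```python
import glymur  # as in the earlier sketch; all names here are hypothetical

def movie_stream(frame_files, get_window):
    """Yield each frame decoded only within the viewer's current window."""
    for path in frame_files:
        x, y, w, h = get_window()   # may change while the movie is playing
        yield glymur.Jp2k(path)[y:y + h, x:x + w]

# Panning or zooming mid-playback just changes what get_window() returns;
# nothing outside the requested window is ever decoded.
```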

If not for JPEG2000, you would need to download an entirely new version of the video, amounting to gigabytes of data, every time you made a change. Another way of saying this is “the Web back in the 1990s.”



LEARN MORE

Helioviewer.org (Web app)

A collection of video highlights from 2011 (so far) created by Helioviewer.org users.

See a Helioviewer.org video made by “citizen scientist” LudzikLegoTechnics on YouTube.

The Helioviewer Project Wiki

JHelioviewer (downloadable software)

Read a Web feature about JHelioviewer and its capabilities

The JHelioviewer online handbook

JHelioviewer video tutorial on YouTube HD

ESA Web feature about JHelioviewer.

_____________________________________________________________________________________________________
OH AND DID I MENTION? All opinions and opinionlike objects in this blog are mine alone and NOT those of NASA or Goddard Space Flight Center. And while we’re at it, links to websites posted on this blog do not imply endorsement of those websites by NASA.


Gogblog’s Excellent Atlas 5 Launch Adventure

August 14th, 2010

I just got home from Goddard Space Flight Center, where I was “embedded” this morning — starting at 6 am! — at the Flight Dynamics Facility. This morning, the FDF helped to launch a massive Atlas 5 rocket carrying a military communications satellite into orbit. It was so cool! Our people do the calculations to allow NASA’s orbiting tracking satellite network to follow the Atlas from launch to orbit. Recently I wrote about their work supporting Space Shuttle launches.

At the FDF, you watch the whole thing in a 3-D computer animation environment as well as live on webcam. Here is the moment of launch, looking over the shoulders of two of the FDF engineers who ran the show.

light that candle!



I don’t know about you, but when they hit that final “10…9…8…7…” there is something thrilling about it, like the moment when gamblers go “all in” with every chip they have and there’s no turning back. In this case, a million pounds of rocket, fuel, and satellite sit balanced perfectly on the pad and someone punches that final red button… (ok, maybe it’s a final mouse click)

Anyway, days like this I feel like I have the coolest job in the world.

Soon I’ll post a full account of Gogblog’s Excellent Atlas 5 Launch Adventure, including exclusive video and animation of the launch.

_____________________________________________________________________________________________________
OH AND DID I MENTION? All opinions and opinionlike objects in this blog are mine alone and NOT those of NASA or Goddard Space Flight Center. And while we’re at it, links to websites posted on this blog do not imply endorsement of those websites by NASA.

What They’re Saying About Us: AirSpace Blog Highlights A Little Piece of Computer History Right Here At Goddard

July 6th, 2010
The Goodyear Aerospace Massively Parallel Processor, at NASA/GSFC sometime in the 1980s.

Did you know that Goddard Space Flight Center occupies a little-known place in the history of computer design?

To find out all the details, read this historical post on the AirSpace blog by Paul Ceruzzi, a curator specializing in aerospace computing and electronics in the Division of Space History at the National Air and Space Museum (NASM).

Ceruzzi informs us that among the artifacts on display at NASM’s Steven F. Udvar-Hazy Center near Washington Dulles International Airport is “a pair of large blue boxes crammed full of circuit boards, tucked away in the northwest corner of the McDonnell Space hangar.”

The blue boxes are the Massively Parallel Processor. Ceruzzi explains in his blog post:

“The MPP was built for the Goddard Space Flight Center in Greenbelt, Maryland, by the Goodyear Aerospace Corporation of Akron, Ohio—a division of Goodyear well-known for its lighter-than-air craft, but a company that also was a pioneer in supplying advanced computers to military and aerospace customers. It was designed in the late 1970s, delivered to Goddard in 1983, and operated into the 1990s.”

What did the MPP do? Ok, very simply, a parallel computing machine divides a computing task up into pieces, solves each piece working in parallel, and puts the results all back together in the end. It’s the difference between a single worker with a shovel arduously digging a ditch and 100 workers attacking the task simultaneously as a group.
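For the programmers in the house, here is the same divide-and-conquer idea in miniature, sketched in Python with its standard multiprocessing module. (The MPP itself was programmed very differently; this just shows the principle.)

```python
# Split a big job into pieces, solve the pieces in parallel,
# then put the results back together at the end.
from multiprocessing import Pool

def dig(section):
    """Stand-in for one worker's share of the job."""
    return sum(section)

if __name__ == "__main__":
    ditch = list(range(1_000_000))
    pieces = [ditch[i::100] for i in range(100)]   # 100 "workers"
    with Pool() as pool:
        partial_sums = pool.map(dig, pieces)       # dig in parallel
    print(sum(partial_sums))                       # reassemble the answer
```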

This portion of the Discover supercomputing cluster racks up about 90 teraflops of number-crunching power.

One of the heirs to the MPP sits down the hall from me. It is the Discover supercomputer, based on a parallel-processing architecture. Soon it will have 29,000 processor cores working in parallel to solve massive computational tasks for the new NASA Center for Climate Simulation. You can read more about Discover on Gogblog if you want.

I’m hoping to find someone at Goddard who might have worked with the MPP and can tell us more about what they did with it.

It is therefore Gogblog’s good fortune to be located a few doors from Goddard’s Computational and Information Sciences and Technology Office.

CISTO friends and colleagues…anybody ever work with the MPP? Email me or post your comments on the blog to share with the public and your NASA colleagues.

Contact Gogblog: daniel[dot]a[dot]pendick[at sign]nasa[dot]gov

——–

Gogblog tips his massively parallel hat to Federal News Radio’s Christopher Dorobek for including this blog in the “Dorobek Must Reads” list for July 6.

_____________________________________________________________________________________________________
OH AND DID I MENTION? All opinions and opinionlike objects in this blog are mine alone and NOT those of NASA or Goddard Space Flight Center.


The Supercomputer Down the Hall: A Journey into the Guts of Goddard’s Discover Supercomputing Cluster

June 16th, 2010
This portion of the Discover supercomputing cluster racks up about 90 teraflops of number-crunching power.

Have you ever seen a supercomputer? Do you know how one works?

I got a chance to look a supercomputer in the face recently, when I took an employee tour of the Discover supercomputer at Goddard Space Flight Center. It’s literally down the hall from me, but I had never gotten a chance to see it up close in the almost a year since I started working here. Discover is the workhorse computing resource for the NASA Center for Climate Simulation.

It’s a pretty impressive gadget. Walking between the metal racks packed with equipment, multicolored blinky lights aglow, I thought of a famous scene in 2001: A Space Odyssey. The spaceship’s supercomputer, HAL, has gone all homicidal on the crew, so astronaut Dave Bowman climbs into its brain and starts to unplug stuff. Famously, this reduces the paranoid evil genius HAL to the level of a blubbering toddler singing “Daisy.”

__________________________________________________________________________

Blogolicious Supercomputer Facts

Goddard Space Flight Center’s Discover supercomputer can perform approximately 159 trillion calculations per second. The supercomputer consists of:

  • 14,968 processors
  • 12,904 memory modules
  • 35,608 gigabytes of random-access memory
  • 3,120 hard drives
  • 5 miles of copper cables
  • 6 miles of fiber-optic cables

______________________________________________________________________________________

Tubes and wires oh my! The ENIAC supercomputer.

I would bet that if you asked 10 people on the street to draw a supercomputer, they would produce something like HAL’s nerve center — a softly humming, dimly glowing cybercave.

Or, they might sketch something like ENIAC, the Electronic Numerical Integrator And Computer. Eighty feet long and weighing 27 tons, ENIAC contained more than 17,000 vacuum tubes.

To make computers really fast in those days, you had to place their various components close together so the electrical signals wouldn’t have to travel too far. Each “trip” meant a tiny delay. Many, many delays add up to a computing traffic jam.

These days, it’s different. Supercomputers like Discover are essentially collections of many, many smaller-scale computing devices working in parallel to solve big tasks.

They are not necessarily in the same place, either. Discover’s machinery is spread across several rooms, connected by a high-speed data network. People can network into the system from across the country via data superhighways.

Now I’m going to talk some tech. And I’m going to be disgustingly precise about it. Supercomputer people talk nodes, processors, cores, and teraflops. It’s notoriously confusing, but you have to understand these terms to really get supercomputing. So here we go . . .

The functional unit of Discover is the processor, just like in your desktop PC or laptop (or iPhone or whatever). The processor is a little brain on a silicon chip. It does the number-crunching.

Waaayyyy back in the day — like, before 2005! — the motherboard of your computer sported a single processor on a single chip. If you wanted more processing power, you had to add more chips.

Not anymore. Now the little brain in your computer has multiple Central Processing Units (CPUs), or “cores,” working in parallel. The processor in my MacBook Pro, for example, contains two cores. It’s an Intel Core 2 Duo. Both cores reside on the same chip, the same little slab of silicon.

So, are you still with me?

The Discover supercomputer uses dual-core and quad-core processors. In other words, each slab of silicon hosts two cores or four cores. For the ubergeeks in the house, the latest processors are Intel Xeons based on the “Nehalem” architecture. (And yes, you can buy personal computers with this class of processor, such as the Mac Pro 2.66 GHz workstation.)

Discover uses about 15,000 cores to crunch data. The cores exist within racks and racks of gizmos called nodes.

Each node has two Xeon Nehalem processors, for a total of either four or eight cores. So each node is equivalent to a really, really fast desktop computer, something with twice the horsepower of the aforementioned Mac Pro workstation. Each node has a hard drive for its operating system software as well as network interfaces for moving data in and out.
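To keep the bookkeeping straight, here is the node/processor/core arithmetic from the last few paragraphs in code form. (The node count is back-calculated from the quoted core total, not an official figure.)

```python
# Hierarchy described above: node -> processor chips (sockets) -> cores.
cores_per_processor = 4     # quad-core Xeon "Nehalem"
processors_per_node = 2     # two processor chips per node
cores_per_node = processors_per_node * cores_per_processor   # 8 cores

nodes = 15_000 // cores_per_node   # inferred: about 1,875 such nodes
print(nodes * cores_per_node)      # -> 15000 cores, as quoted above
```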

Blinky lights: one of the high-speed switches that connect Discover's computing nodes.

So what does this all mean? It means that the supercomputer at the heart of climate and weather science at NASA Goddard runs on the same kind of processors found in personal computers — perhaps yours.

The processors work in parallel, like an army of workers digging a canal with shovels. Each processor lifts a shovelful of data at a time, but if you have a lot of shovels, you end up with the Panama Canal.

Of course, the thousands of workers also need life support, like shelter, food, and water. In supercomputing terms, that means electricity and cooling systems to carry waste heat away from the processors.

A lot of clever engineering went into packing Discover into a couple of rooms. For example, the back doors of the equipment racks have heat-sucking radiators built into them. The radiators are hooked up to Goddard’s chilled water system. Having multiple cores on the same chip reduces the hardware required to prevent a cybermeltdown.

Although right now Discover crunches with 15,000 cores, a planned upgrade will bring it to around 29,000. And what does this all buy you? About 160 teraflops of computing power for the moment.

A teraflop is one trillion floating point operations per second. Flops measure the computing horsepower of a system, its ability to crunch numbers. Add two numbers in your head: you have just completed one floating point operation.

So what is 160 teraflops?

Imagine the entire world population, roughly 6.9 billion people, each adding two numbers every second. It would take them about six and a half hours to match what Discover does in a single second. That’s 160 teraflops!
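Here is the back-of-envelope arithmetic for anyone who wants to check me, using the approximate 2010 world population:

```python
discover_flops = 160e12      # 160 teraflops = 160 trillion operations/second
world_population = 6.9e9     # approximate world population in 2010

# How long must everyone add numbers (one operation per second each)
# to match one second of Discover's work?
seconds = discover_flops / world_population
print(f"{seconds:,.0f} seconds, or about {seconds / 3600:.1f} hours")
# -> roughly 23,000 seconds, about 6.4 hours
```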

_____________________________________________________________________________________________________
OH AND DID I MENTION? All opinions and opinionlike objects in this blog are mine alone and NOT those of NASA or Goddard Space Flight Center.