Friday, August 31, 2012

Building Effective HPC Systems - Big Data Challenge for Big Compute

As HPC systems and applications grow larger and more complex, monitoring and analyzing their performance is becoming a Big Data challenge in its own right. Millions of data points need to be collected rapidly and analyzed constantly to enable optimal workload placement and problem determination.

The NSF has just funded a pair of universities to evaluate the effectiveness of research HPC systems, and below is the news release from one of them.

University at Buffalo, TACC Receive Funding to Evaluate XSEDE Clusters
 
AUSTIN, TX, Aug. 30 -- A National Science Foundation (NSF) grant is funding the University at Buffalo and the Texas Advanced Computing Center (TACC) at The University of Texas at Austin to evaluate the effectiveness of high-performance computing (HPC) systems in the NSF Extreme Science and Engineering Discovery Environment (XSEDE) program and HPC systems in general.

Today's high-performance computing systems are a complex combination of software, processors, memory, networks, and storage systems characterized by frequent disruptive technological advances. In this environment, service providers, users, system managers and funding agencies find it difficult to know if systems are realizing their optimal performance, or if all subcomponents are functioning properly.

Through the "Integrated HPC Systems Usage and Performance of Resources Monitoring and Modeling (SUPReMM)" grant, the University at Buffalo and TACC will develop new tools and a comprehensive knowledge base to improve the ability to monitor and understand performance for diverse applications on HPC systems.

The close to $1 million grant will build on and combine work that has been underway at the University at Buffalo under the Technology Audit Service (TAS) for XSEDE and at TACC as part of the Ranger Technology Insertion effort.

"Obtaining reliable data without efficient data management is impossible in today's complex HPC environment," said Barry Schneider, program director in the NSF's Office of Cyberinfrastructure. "This collaborative project will enable a much more complete understanding of the resources available through the XSEDE program and will increase the productivity of all of the stakeholders, service providers, users and sponsors in our computational ecosystem."

"Ultimately, it will advance our goals of providing open source tools for the entire science community to effectively utilize all HPC resources being deployed by the NSF for open science research in the academic community," Schneider said.

Working with the XSEDE TAS team at Buffalo, TACC staff members are running data gathering tools on the Ranger and Lonestar supercomputers to evaluate data that is relevant to application performance.
"We gather data on every system node at the beginning and end of every job, and every 10 minutes during the job—that's a billion records per system each month," said Bill Barth, director of high-performance computing at TACC. "It's going to end up being a Big Data problem in the end."

The tools will present various views on existing XSEDE usage data from the central database, according to Barth. This data will include how individual user jobs and codes are performing on a system at a detailed level. In the coming year, the research and development effort will gather data and evaluate performance on all XSEDE systems, including Stampede, which will launch in January 2013.

"HPC resources are always at a premium," said Abani Patra, principal investigator of the University at Buffalo project. "Even a 10 percent increase in operational efficiency will save millions of dollars. This is a logical extension of the larger XSEDE TAS effort."

TAS, through the XSEDE Metrics on Demand (XDMoD) portal, provides quantitative and qualitative metrics of performance rapidly to all stakeholders, including NSF leadership, service providers, and the XSEDE user community.

Work on the grant began on July 1, 2012, and will continue for two years.
-----
Source: Texas Advanced Computing Center

Update
  •  2012.08.31 - original post

Thursday, August 30, 2012

HPC Rulebook (#1) - Scaled Speedup

People sometimes ask me what high-performance computing is and how it differs from other types of computing such as desktop, handheld or cloud. In this series, called HPC Rulebook, I will attempt to summarize a few unique and possibly defining characteristics of high-performance computing.

The first one is called Scaled Speedup.

High-performance computing relies on a scalable architecture to speed up computation. The problem is broken into many digestible chunks that are dispatched to dozens or thousands of workhorses (compute nodes), and the outcomes are then aggregated into a result. Speedup measures this scalability by comparing the run time on many nodes versus a single system. In a well-architected system (balanced CPU, network I/O and memory), a well-developed parallel application can scale to a large number of nodes.

In real-life scenarios, the problem size is often fixed, so the speedup is eventually limited: there is only so much data to compute, and the overhead of dividing up and coordinating the work eventually outweighs the benefit of the extra machines being added to the system.
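
This fixed-size limit is what Amdahl's law captures: the serial fraction of the work caps the achievable speedup no matter how many nodes are added. A minimal sketch, assuming a 5% serial fraction purely for illustration:

```python
# Amdahl's law: speedup of a fixed-size problem with serial fraction s on n nodes.
# The 5% serial fraction is an illustrative assumption.

def amdahl_speedup(s: float, n: int) -> float:
    return 1.0 / (s + (1.0 - s) / n)

for n in (16, 256, 4096):
    print(f"{n:5d} nodes -> speedup {amdahl_speedup(0.05, n):6.1f}")
# Even with only 5% serial work, speedup saturates near 1/0.05 = 20x.
```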

However, at the frontier of science, the research problem need not stay fixed, so a new way of thinking is necessary to fully explore and extend the power of HPC. Hence the notion of scaled speedup, in which the problem size scales along with the computing power so that a much larger problem can be completed in a fixed amount of time. This scaled speedup is also known as Gustafson's law.
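
Under Gustafson's model the speedup is measured against the scaled workload, so it keeps growing with the node count instead of saturating. A companion sketch to the one above, again assuming a 5% serial fraction:

```python
# Gustafson's law: scaled speedup when the problem grows with the machine so the
# run time stays roughly fixed. Same illustrative 5% serial fraction as above.

def gustafson_speedup(s: float, n: int) -> float:
    return n - s * (n - 1)

for n in (16, 256, 4096):
    print(f"{n:5d} nodes -> scaled speedup {gustafson_speedup(0.05, n):8.1f}")
# The speedup keeps growing nearly linearly with n instead of saturating.
```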



Tuesday, August 28, 2012

IBM Strengthens SmartCloud with PaaS Offering

IBM announced today the pre-release of IBM SmartCloud Application Services, coming in September.

IBM SmartCloud Application Services (SCAS) is the new Platform as a Service (PaaS) offering that will become accessible to all existing commercial SmartCloud Enterprise (SCE) clients within the next few weeks and to new SCE clients as they sign up. Please note that clients who sign up for SCE via the 2012 Fall Promotion will receive different services; see the Fall Promo materials for details.

The IBM SmartCloud Application Services pre-release services will be enabled in existing SmartCloud Enterprise accounts during the month of September. IBM will roll out the services in waves and we expect all accounts to be enabled by the end of the month.

The benefits of IBM SmartCloud Application Services:

  • Self-service, instant access to an application development suite of tools, middleware and databases (available via pattern-based technology), optimized to run on a virtual infrastructure. IBM builds expertise into these pre-integrated patterns, which accelerate the development and delivery of new applications, eliminate manual errors, and drive consistent results.
  • An application environment that simplifies deployment and management of applications and automatically scales by adding capacity based on load.
  • Accelerated time to value, leveraging these rapidly deployed, flexible and scalable resources to enable enterprise application development and deployment in the cloud.
  • On-demand, pay-as-you-go model -- an alternative to a fixed-cost model. 

Links:

Amazon Reshapes Computing by Cloud

NYT has an in-depth article today on Amazon Web Services and how this pioneer of Cloud computing is reshaping the industry.

Here is the link to the article.

Monday, August 27, 2012

IBM plugs another Cloud-based software provider

IBM said today it had agreed to buy Wayne, Pa., human resources software firm Kenexa Corp. for $1.3 billion or $46 per share.

The acquisition "brings a unique combination of Cloud-based technology and consulting services that integrate both people and processes," IBM said in a statement.

Kenexa is a social network that helps companies recruit workers. The acquisition "bolsters IBM's leadership in helping clients embrace social business capabilities, while gaining actionable insights from the enormous streams of information generated from social networks every day," the statement said.
"Every company, across every business operation, is looking to tap into the power of social networking to transform the way they work, collaborate and out innovate their competitors," said Alistair Rennie, IBM's general manager for social business.

"The customer is the big winner in all this because the combination of our two organizations will deliver more business outcomes than ever before," said Kenexa Chief Executive Officer Rudy Karsan.
IBM also gains access to data related to Kenexa's 8,900 customers, which include business giants in financial services, pharmaceuticals and retail, including half of the Fortune 500 firms.

Cloud & Big Data Can Float or Sink Many

I came across this insightful AP article today on how Cloud and Big Data could make or break HP and Dell as they float or sink in the wave of tech innovation brought on by Apple and Google.

SAN FRANCISCO (AP) — Hewlett-Packard Co. used to be known as a place where innovative thinkers flocked to work on great ideas that opened new frontiers in technology. These days, HP is looking behind the times.

Coming off a five-year stretch of miscalculations, HP is in such desperate need of a reboot that many investors have written off its chances of a comeback.

Consider this: Since Apple Inc. shifted the direction of computing with the release of the iPhone in June 2007, HP's market value has plunged by 60 percent to $35 billion. During that time, HP has spent more than $40 billion on dozens of acquisitions that have largely turned out to be duds so far.

"Just think of all the value that they have destroyed," ISI Group analyst Brian Marshall said. "It has been a case of just horrible management."

Marshall traces the bungling to the reign of Carly Fiorina, who pushed through an acquisition of Compaq Computer a decade ago despite staunch resistance from many shareholders, including the heirs of HP's co-founders. After HP ousted Fiorina in 2005, other questionable deals and investments were made by two subsequent CEOs, Mark Hurd and Leo Apotheker.

HP hired Meg Whitman 11 months ago in the latest effort to salvage what remains of one of the most hallowed names in Silicon Valley 73 years after its start in a Palo Alto, Calif., garage.

The latest reminder of HP's ineptitude came last week when the company reported an $8.9 billion quarterly loss, the largest in the company's history. Most of the loss stemmed from an accounting charge taken to acknowledge that HP paid far too much when it bought technology consultant Electronic Data Systems for $13 billion in 2008.

HP might have been unchallenged for the ignominious title of technology's most troubled company if not for one of its biggest rivals, Dell Inc.

Like HP, Dell missed the trends that have turned selling PCs into one of technology's least profitable and slowest growing niches. As a result, Dell's market value has also plummeted by 60 percent, to about $20 billion, since the iPhone's release.

That means the combined market value of HP and Dell – the two largest PC makers in the U.S. – is less than the $63 billion in revenue Apple got from iPhones and various accessories during just the past nine months.
The hand-held, touch-based computing revolution unleashed by the iPhone and Apple's 2010 introduction of the iPad isn't the only challenge facing HP and Dell.

They are also scrambling to catch up in two other rapidly growing fields – "cloud computing" and "Big Data."
Cloud computing refers to the practice of distributing software applications over high-speed Internet connections from remote data centers so that customers can use them on any device with online access. Big Data is a broad term for hardware storage and other services that help navigate the sea of information flowing in from the increasing amount of work, play, shopping and social interaction happening online.
Both HP and Dell want a piece of the action because cloud computing and Big Data boast higher margins and growth opportunities than the PC business.

It's not an impossible transition, as demonstrated by the once-slumping but now-thriving IBM Corp., a technology icon even older than HP. But IBM began its makeover during the 1990s under Louis Gerstner and went through its share of turmoil before selling its PC business to Lenovo Group in 2005. HP and Dell are now trying to emulate IBM, but they may be making their moves too late as they try to compete with IBM and Oracle Corp., as well as a crop of younger companies that focus exclusively on cloud computing or Big Data.

A revival at HP will take time, something that HP CEO Meg Whitman has repeatedly stressed during her first 11 months on the job.

"Make no mistake about it: We are still in the early stages of a turnaround," Whitman told analysts during a conference call last week.

The problems Whitman is trying to fix were inherited from Apotheker and Hurd.

HP hired Apotheker after he was dumped by his previous employer. He lasted less than a year as HP's CEO – just long enough to engineer an $11 billion acquisition of business software maker Autonomy, another poorly performing deal that is threatening to saddle HP with another huge charge.

Before Apotheker, Hurd won praise for cutting costs during his five-year reign at HP, but Marshall believes HP was too slow to respond to the mobile computing, cloud computing and Big Data craze that began to unfold under Hurd's watch. HP also started its costly shopping spree while Hurd was CEO.
How much further will HP and Dell fall before they hit bottom?

HP's revenue has declined in each of the past four quarters, compared with the same period a year earlier, and analysts expect the trend to extend into next year. The most pessimistic scenarios envision HP's annual revenue falling from about $120 billion this year to $90 billion toward the end of this decade.
The latest projections for PC sales also paint a grim picture. The research firm IDC now predicts PC shipments this year will increase by less than 1 percent, down from its earlier forecast of 5 percent.

Whitman is determined to offset the crumbling revenue by trimming expenses. She already is trying to lower annual costs by $3.5 billion during the next two years, mostly by eliminating 27,000 jobs, or 8 percent of HP's work force.

Marshall expects Whitman's austerity campaign to enable HP to maintain its annual earnings at about $4 per share, excluding accounting charges, for the foreseeable future.

If HP can do that, Marshall believes the stock will turn out to be a bargain investment, even though he isn't expecting the business to grow during the next few years. The shares were trading around $17.50 Monday, near their lowest level since 2004.

One of the main reasons Marshall still likes HP's stock at this price is the company's quarterly dividend of 13.2 cents per share. That translates into a dividend yield of about 3 percent, an attractive return during these times of puny interest rates.

Dell's stock looks less attractive, partly because its earnings appear to still be dropping. The company, which is based in Round Rock, Texas, signaled its weakness last week, when it lowered its earnings projection for the current fiscal year by 20 percent.

Dell executives also indicated that the company is unlikely to get a sales lift from the Oct. 26 release of Microsoft Corp.'s much-anticipated makeover of its Windows operating system. That's because Dell focuses on selling PCs to companies, which typically take a long time before they decide to switch from one version of Windows to the next generation.

Dell shares slipped to a new three-year low of $11.15 during Monday's trading.

As PC sales languish, both HP and Dell are likely to spend more on cloud computing, data storage and technology consulting.

Although those look like prudent bets now, HP and Dell probably should be spending more money trying to develop products and services that turn into "the next new thing" in three or four years, said Erik Gordon, a University of Michigan law and business professor who has been tracking the troubles of both companies.
"It's like they are both standing on the dock watching boats that have already sailed," Gordon said. "They are going to have to swim very fast just to have chance to climb back on one of the boats."

Evaluating Networking Options for HPC & Cloud

InfiniBand (IB) and High-Speed Ethernet (HSE) interconnects are generating a lot of excitement for building next-generation scientific, enterprise and cloud computing systems. The OpenFabrics stack is emerging to encapsulate both IB and Ethernet in a unified manner, and hardware technologies such as Virtual Protocol Interconnect (VPI) and RDMA over Converged Enhanced Ethernet (RoCE) are converging the hardware solutions.

In this video recorded at Hot Interconnects 2012 in Santa Clara last week, Jerome Vienne from Ohio State University presents: Performance Analysis and Evaluation of InfiniBand FDR and 40GigE RoCE on HPC and Cloud Computing Systems.


“We evaluate various high performance interconnects over the new PCIe Gen3 interface with HPC as well as cloud computing workloads. Our comprehensive analysis, done at different levels, provides a global scope of the impact these modern interconnects have on the performance of HPC applications and cloud computing middlewares. The results of our experiments show that the latest InfiniBand FDR interconnect gives the best performance for HPC as well as cloud computing applications.”
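
For readers curious what such an evaluation rests on, the basic point-to-point latency test is a simple ping-pong exchange. Below is a minimal sketch using mpi4py; it is not the authors' benchmark code, just an illustration of the kind of measurement involved:

```python
# Minimal MPI ping-pong latency sketch (illustrative, not the paper's benchmark).
# Run with something like: mpirun -np 2 python pingpong.py
from mpi4py import MPI
import time

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
msg = bytearray(8)            # 8-byte message; sweep the size to probe bandwidth too
iters = 10_000

comm.Barrier()
start = time.perf_counter()
for _ in range(iters):
    if rank == 0:
        comm.Send(msg, dest=1)
        comm.Recv(msg, source=1)
    else:
        comm.Recv(msg, source=0)
        comm.Send(msg, dest=0)
elapsed = time.perf_counter() - start

if rank == 0:
    print(f"average one-way latency: {elapsed / (2 * iters) * 1e6:.2f} microseconds")
```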

Update:
  • 2012.08.27 - original post

Thursday, August 23, 2012

Reaching a Milestone at IBM - Senior Certification

After a three-month journey involving a 50-page package, 8 references and 3 interviews, I finally crossed a milestone at IBM today: my package for the thought-leader level in actualizing IT solutions was approved by the review board.

Achieving this level qualifies me for IBM IT Specialist Senior Certification and Open Group Level Three IT Specialist Certification.

I want to thank all the board reviewers, mentors, coaches, reference supporters and colleagues over the last ten years. It is a great honor to earn a check mark from IBM and I'd like to share the joy with you all!

Wednesday, August 22, 2012

Amazon Offers Up Data Archiving in Cloud

Amazon.com recently announced a cloud storage solution from Amazon Web Services (AWS), further expanding its cloud offerings. It is interesting to note that, as an archival service, its pricing model discourages frequent access with surcharges on request frequency and bandwidth.

This new service is named Amazon Glacier and is a low-cost solution for data archiving, backups and other long-term storage projects where data is not accessed frequently but needs to be retained for future reference.

The cost of the service starts at one cent per gigabyte per month, with upload and retrieval requests costing five cents per thousand requests and outbound data transfer (i.e., moving data out of the AWS region where it is stored) costing 12 cents per gigabyte.
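
To put those rates in perspective, here is a rough monthly-cost estimate. The archive size, request count and outbound transfer volume are assumptions chosen purely for illustration:

```python
# Rough monthly cost at the rates quoted above. Archive size, request count
# and outbound transfer volume are illustrative assumptions, not AWS figures.

storage_gb  = 50_000    # assumed archive size in GB
requests    = 20_000    # assumed upload/retrieval requests this month
data_out_gb = 500       # assumed data transferred out in GB

cost = (storage_gb * 0.01            # $0.01 per GB-month of storage
        + (requests / 1000) * 0.05   # $0.05 per 1,000 requests
        + data_out_gb * 0.12)        # $0.12 per GB transferred out
print(f"Estimated monthly bill: ${cost:,.2f}")   # ~$561.00
```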

Companies usually incur significant costs for data archiving. They initially make an expensive upfront payment, after which they end up purchasing additional storage space in anticipation of growing backup demand, leading to under-utilized capacity and wasted money. With Amazon Glacier, companies will be able to keep costs in line with actual usage, allowing managers to know the exact costs of their storage systems at all times.

Cloud storage came into prominence in 2009, with Nirvanix and Amazon's Simple Storage Service (S3) being two of the major pioneers. Since then, Amazon has continued to dominate the space, with other players like Rackspace (RAX) and Microsoft (MSFT) offering their own solutions.


Tracking NGS IT Technologies

On my Smarter NGS website, I added a section today to follow and track the development of Next-Gen Sequencing IT technologies such as GPUs, Hadoop and clusters. The first area to start with is Hadoop.

 

jPage: HPC Cloud Providers

I will start a post to compile a list of HPC cloud service providers. These are vendors that provide HPC Platform as a Service, such that users can sign up and run HPC workloads in a public cloud without owning any infrastructure on premises.

Amazon EC2
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers.  (EC2 HPC Feature)

ServedBy.net
Cloud HPC Infrastructure from ServedBy the Net can sustain the most intensive HPC applications with a predictable cost based on your needs. We manage the expensive hardware and provide ready-to-run OS templates, which saves you time and money and allows you to focus on the needs of your application.

GreenButton
GreenButton™ is an award-winning global software company that specializes in high-performance computing (HPC) in the cloud. The company provides a cloud platform for development and delivery of software and services that enable independent software vendors (ISVs) to move to the cloud and their users to access cloud resources. GreenButton is Microsoft Corp's 2011 Windows Azure ISV Partner of the Year, and the company has offices in New Zealand, Palo Alto and Seattle.

ProfitBricks
ProfitBricks is the first HPC cloud provider to introduce InfiniBand as the interconnect fabric for its cloud infrastructure, which makes its offering one of the more capable HPC systems available from the cloud.


Nimbix
Nimbix is a provider of cloud-based High Performance Computing infrastructure and applications. Nimbix offers HPC applications as a service through the Nimbix Accelerated Compute Cloud ™, dramatically speeding up data processing for Life Sciences, Oil & Gas and Rendering applications. Nimbix operates unique high performance hybrid systems and accelerated servers in its Dallas, Texas datacenter.


Update:
  • 2012.08.22 - original post
  • 2012.08.24 - added GreenButton
  • 2012.09.12 - added ProfitBricks
  • 2012.09.18 - added Nimbix

eMagazine Explores Convergence of HPC, Big Data and Cloud

The latest issue of Journey to Cloud, Intel’s cloud computing eMagazine, is hot off the presses and ready to download. In this issue, writers explore key topics like alternative solutions for big data scale-out storage, Hadoop, next-generation cloud management, cloud security, HPC on demand, and more.

You can access or download the eMagazine for free from Intel.



jTool: i2b2 for Genomic Medicine

Informatics for Integrating Biology and the Bedside (i2b2) is one of seven projects sponsored by the NIH Roadmap National Centers for Biomedical Computing (http://www.ncbcs.org).

Its mission is to provide clinical investigators with the tools necessary to integrate medical record and clinical research data in the genomics age: a software suite to construct and integrate the modern clinical research chart. i2b2 software may be used by an enterprise's research community to find sets of interesting patients from electronic patient medical record data, while preserving patient privacy through a query tool interface.

Project-specific mini-databases ("data marts") can be created from these sets to make highly detailed data available on these specific patients to the investigators on the i2b2 platform, as reviewed and restricted by the Institutional Review Board.
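
As a rough illustration of that workflow, the sketch below selects a patient set with a query and then copies only those patients' records into a project-specific mini-database. The table and column names are simplified assumptions, not the actual i2b2 schema:

```python
# Illustrative sketch of the workflow described above: select a patient set with
# a query, then copy those patients' records into a project-specific data mart.
# Table and column names are simplified assumptions, not the actual i2b2 schema.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE observations (patient_id INTEGER, concept_code TEXT, obs_date TEXT);
    INSERT INTO observations VALUES
        (1, 'ICD9:250.00',  '2012-01-10'),   -- diabetes diagnosis
        (2, 'ICD9:401.9',   '2012-02-03'),   -- hypertension diagnosis
        (1, 'LOINC:4548-4', '2012-03-15');   -- HbA1c lab result
""")

# 1) Query step: find the set of patients matching a concept of interest.
cohort = [pid for (pid,) in con.execute(
    "SELECT DISTINCT patient_id FROM observations WHERE concept_code LIKE 'ICD9:250%'")]

# 2) Data mart step: copy only that cohort's detailed records into a mini-database.
id_list = ",".join(str(pid) for pid in cohort)    # integer ids from our own toy data
con.execute(f"CREATE TABLE diabetes_mart AS "
            f"SELECT * FROM observations WHERE patient_id IN ({id_list})")

rows = con.execute("SELECT COUNT(*) FROM diabetes_mart").fetchone()[0]
print(f"cohort={cohort}, mart rows={rows}")       # cohort=[1], mart rows=2
```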

The current version of this software has been released into the public domain.

Published use cases:
Links:

Update:
  • 2012.08.22 - original post

Wednesday, August 15, 2012

Running HPC Workload on Cloud - A Real-life Case Study

Running high-performance computing (HPC) workloads in a public cloud has been an interesting yet challenging issue for many researchers and research institutions. Having large computational power at your fingertips on a moment's notice is alluring, yet the potential pitfalls of poor performance and high cost can be scary.

I came upon a thoughtful and insightful blog post from "Astronomy Computing Today". It presents research by Ewa Deelman of USC and G. Bruce Berriman of Caltech on the interesting topic of running HPC workloads in the cloud, in this case astronomical applications on the Amazon EC2 HPC cloud.

The conclusions of the research are twofold: 1) the cost-performance of running in the cloud depends on application requirements and needs to be carefully evaluated; 2) for now, avoid workloads that demand mass storage, as this is still the most expensive aspect of running on a public cloud such as EC2.

You can access the post here.

Update:
  • 2012.08.15: original post

Monday, August 13, 2012

Bigger, Faster and Cheaper - PetaStore Case Study

In 2010, I architected the petascale active archive solution for the University of Oklahoma, which was implemented as the PetaStore system in 2011. It is a combined disk-and-tape active archive solution. The system has been in production for about a year, and IBM published a case study about it today.

Since 1890, the University of Oklahoma (OU) has provided higher-level education and valuable research through its academic programs. With involvement in science, technology, engineering and mathematics, the university has increased its focus on high performance computing (HPC) to support data-centric research. In service of OU’s education and research mission, the OU Supercomputing Center for Education & Research (OSCER), a division of OU Information Technology, provides support for research projects, providing an HPC infrastructure for the university.
 
Rapid data growth in academic research
 
In a worldwide trend that spans the full spectrum of academic research, explosive data growth has directly affected university research programs. Spanning a diverse range of data sources, from gene sequencing to astronomy, datasets have in some cases rapidly grown to multiple petabytes (millions of gigabytes).
 
One ongoing research project that produces massive amounts of data is conducted by OU’s Center for Analysis and Prediction of Storms. Each year, this project becomes one of the world’s largest storm forecasting endeavors, frequently producing terabytes of data per day. Much of this real-time data is shared with professional forecasters, but a large amount is stored for later analysis. Long-term storage of this data holds strong scientific value, and in many cases, is required by research funding agencies. Understandably, storage space had become a major issue for the university.
 
Need for an onsite storage system
 
In the past, for projects like storm forecasting, OU did not have the capability to store large amounts of data on campus—much of the data had to be stored offsite at national supercomputing centers. This not only created issues for performance and management at the university, it also forced researchers to reduce the amounts of data for offsite storage, creating potential for loss of information that could be valuable for future analysis.
 
Henry Neeman, director of OSCER, realized that to continue supporting many of the university’s research projects—and to retain funding—OU would need a large scale archival storage system that enabled long term data storage while containing costs for deployment and operations.
 
With a clear vision for the new storage system, OU began reviewing bids from multiple vendors. Neeman noticed that while most proposed solutions were technically capable, the IBM solution was able to meet technical requirements and stay within budget. Ultimately, it offered the best value to the university and would go on to establish a powerful new business model for storage of research data.
 
High-capacity, cost-effective data archive
 
Implementing a combination of disk- and tape-based storage, OU was able to establish a storage system known as the Oklahoma PetaStore, which is capable of handling petabytes (PB) of data. For high-capacity disk storage, the IBM System Storage DCS9900 was selected—which is scalable up to 1.7 PB. For longer-term data storage, OU chose the System Storage TS3500 Tape Library—with an initial capacity up to 4.3 PB and expandable to over 60 PB. To run these storage systems, six IBM System x3650 class servers were selected, running IBM General Parallel File System (GPFS™) on the disk system and IBM Tivoli Storage Manager on the tape library to automatically move or copy data to tape.
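
The disk-to-tape movement in a design like this is policy driven. Purely as an illustration of what an age-based migration pass does, here is a toy sketch; the mount points and the age threshold are assumptions, and the real PetaStore relies on GPFS and Tivoli Storage Manager rather than a script like this:

```python
# Toy sketch of an age-based migration pass from a disk tier to a tape-backed
# tier. Purely illustrative -- the actual PetaStore uses GPFS and Tivoli Storage
# Manager, not this script. Paths and the age threshold are assumptions.
import os, shutil, time

DISK_TIER = "/petastore/disk"    # assumed fast disk pool mount point
TAPE_TIER = "/petastore/tape"    # assumed tape-backed (HSM) mount point
AGE_DAYS  = 90                   # migrate files not accessed in 90 days

cutoff = time.time() - AGE_DAYS * 86400
for root, _, files in os.walk(DISK_TIER):
    for name in files:
        path = os.path.join(root, name)
        if os.stat(path).st_atime < cutoff:   # last access older than the cutoff
            dest = os.path.join(TAPE_TIER, os.path.relpath(path, DISK_TIER))
            os.makedirs(os.path.dirname(dest), exist_ok=True)
            shutil.move(path, dest)           # hand the file to the tape-backed tier
            print("migrated", path)
```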
 
Neeman says one of the main reasons they chose IBM was the cost effectiveness of the tape solution. Unlike the TS3500 and Tivoli Storage Manager, many other tape solutions impose additional cost, such as tape cartridge slot activation upcharges and per-capacity software upcharges—demands that could be prohibitive to researchers. The TS3500 Tape library offers a flexible upgrade path, enabling users to easily and affordably expand the initial capacity. These savings even enabled OU to implement a mechanism to access and manage backup data through extensible interfaces. OU has adopted an innovative business model under which storage costs are shared among stakeholders. In this model, a grant from the National Science Foundation pays for the hardware, software and initial support; OU covers the space, power, cooling, labor and longer-term support costs; and the researchers purchase storage media (tape cartridges and disk drives) to archive their datasets, which OSCER deploys and maintains without usage upcharges.
 
Storage that impresses on many levels
 
The PetaStore provides research teams with a hugely expandable archive system, allowing data to be stored through several duplication policy choices that are set by the researchers. The connectivity capabilities allow data to be accessible not only to the university, but to other institutions and collaborators.
 
Although capacity was more of a priority than speed when designing the PetaStore, this IBM solution has shown strong performance, with tape drives operating close to peak speed. Another key benefit to the solution is its cost-effectiveness—not only for hardware, but for the reduction of labor costs for the researchers. These benefits have been noticed by Neeman, who says, “Without the PetaStore, several very large scale, data-centric research projects would be considerably more difficult, time consuming and expensive to undertake—some of them so much so as to be impractical.”
 
Continued innovation with IBM
 
By choosing the IBM solution for the PetaStore project, the University of Oklahoma has ensured a future of continued innovation in academic research. The system not only facilitates storage for the entire lifecycle of research data, it ensures that the PetaStore can continue operating and expanding at very low cost. This is critical for the university to continue to receive funding—the solution’s built-in cost efficiency proves to research funding agencies that the university can continue to operate the storage system within budget.

Overall, the university and research teams have seen numerous advantages to the IBM solution, and plan for it to seamlessly expand along with their storage needs. According to Neeman, "We only needed three things: bigger, faster and cheaper," and the IBM solution was able to deliver on all fronts. Neeman predicts that data storage solutions like the Oklahoma PetaStore will become increasingly common at research institutions across the country and worldwide.

Source: 

Wednesday, August 8, 2012

Finding New Explosive Function of H2O - Computationally

The most abundant material on Earth exhibits some unusual chemical properties when placed under extreme conditions.

Laboratory scientists have shown that water, in hot dense environments, plays an unexpected role in catalyzing complex explosive reactions. A catalyst is a compound that speeds chemical reactions without being consumed. Platinum and enzymes are common catalysts. But water rarely, if ever, acts as a chemical catalyst under ordinary conditions.

Detonations of high explosives made up of oxygen and hydrogen produce water at thousands of degrees Kelvin and up to 100,000 atmospheres of pressure, similar to conditions in the interiors of giant planets.
While the properties of pure water at high pressures and temperatures have been studied for years, this extreme water in a reactive environment has never been studied. Until now.

Using first-principle atomistic simulations of the detonation of the high explosive PETN (pentaerythritol tetranitrate), the Livermore team discovered that in water, when one hydrogen atom serves as a reducer and the hydroxide (OH) serves as an oxidizer, the atoms act as a dynamic team that transports oxygen between reaction centers.

“This was news to us,” said lead researcher Christine Wu. “This suggests that water also may catalyze reactions in other explosives and in planetary interiors.”

This finding is contrary to the current view that water is simply a stable detonation product.

“Under extreme conditions, water is chemically peculiar because of its frequent dissociations,” Wu said. “As you compress it to the conditions you’d find in the interior of a planet, the hydrogen of a water molecule starts to move around very fast.”

In molecular dynamic simulations using the Lab’s BlueGene L supercomputer, Wu and colleagues Larry Fried, Lin Yang, Nir Goldman and Sorin Bastea found that the hydrogen (H) atoms and hydroxide (OH) molecules in water transport oxygen from nitrogen storage to carbon fuel under PETN detonation conditions (temperatures between 3,000 and 4,200 Kelvin). Under both temperature conditions, this “extreme water” served both as an end product and as a key chemical catalyst. 

For a molecular high explosive that is made up of carbon, nitrogen, oxygen and hydrogen, such as PETN, the three major gaseous products are water, carbon dioxide and molecular nitrogen.

But to date, the chemical processes leading to these stable compounds are not well understood.

The team found that nitrogen loses its oxygen mostly to hydrogen, not to carbon, even after the concentration of water reaches equilibrium. They also found that carbon atoms capture oxygen mostly from hydroxide, rather than directly from nitrogen monoxide (NO) or nitrogen dioxide (NO2). Meanwhile, water disassociates and recombines with hydrogen and hydroxide frequently. 

“The water that comes out is part of the energy release mechanism,” Wu said. “This catalytic mechanism is completely different from previously proposed decomposition mechanisms for PETN or similar explosives, in which water is just an end product. This new discovery could have implications for scientists studying the interiors of Uranus and Neptune where water is in an extreme form.”

The research appears in the premier issue (April 2009) of the new journal Nature Chemistry.

Source: https://newsline.llnl.gov/_rev02/articles/2009/mar/03.27.09-water.php

 

'Extreme-Scale' Computing is Serious Business

Exascale computing is still far out on the horizon, but the DOE is making the investments and efforts needed to put the nation on the right path. The undertaking is not just serious business for the national labs that received funding, but also a critical matter for our country's economic competitiveness.

To out-compete, we need to out-compute our competition.

DOE Awards $62 Million for 'Extreme-Scale' Computing

LIVERMORE, Calif., Aug. 6 -- The U.S. Department of Energy's Lawrence Livermore National Laboratory issued the following news release:

Under an initiative called FastForward, the Department of Energy (DOE) Office of Science and the National Nuclear Security Administration (NNSA) have awarded $62 million in research and development (R&D) contracts to five leading companies in high performance computing to accelerate the development of next-generation supercomputers vital to national defense, scientific research, energy security, and the nation's economic competitiveness.

AMD, IBM, Intel, Nvidia and Whamcloud received awards to advance "extreme scale" computing technology with the goal of funding innovative R&D of critical technologies needed to deliver next generation capabilities within a reasonable energy footprint. DOE missions require exascale systems that operate at quintillions of floating point operations per second. Such systems would be 1,000 times faster than a 1-petaflop (quadrillion floating point operations per second) supercomputer. Currently, the world's fastest supercomputer -- the IBM BlueGene/Q Sequoia system at Lawrence Livermore National Laboratory (LLNL) -- clocks in at 16.3 petaflops.

"The challenge is to deliver 1,000 times the performance of today's computers with only a fraction more of the system's energy consumption and space requirements," said William Harrod, division director of research in DOE Office of Science's Advanced Scientific Computing Research program.

Contract awards were in three high performance computing (HPC) technology areas: processors, memory, and storage and input/output (I/O) -- the communication between computer processing systems and outside networks. The total value of the contracts is $62.5 million and covers a two-year period of performance.
The FastForward program, funded by DOE's Office of Science and NNSA, is managed by LLNL on behalf of seven national laboratories, including Lawrence Berkeley, Los Alamos, Sandia, Oak Ridge, Argonne and Pacific Northwest. Technical experts from the participating national laboratories evaluated and helped select the proposals and will work with selected vendors on co-design.

"Exascale computing will be required to fully assess the performance of our nation's nuclear stockpile in all foreseeable situations without returning to nuclear testing," said Bob Meisner, head of NNSA's Advanced Simulation and Computing (ASC) program. "The insight that comes from simulations is also vital to addressing nonproliferation and counterterrorism issues, as well as informing other national security decisions."

The FastForward initiative is intended to speed up and influence the development of technologies companies are pursuing for commercialization to ensure these products include features DOE Science and NNSA laboratories require for research.

"Recognizing that the broader computing market will drive innovation in a direction that may not meet DOE mission needs in national security and science, we need to ensure that exascale systems will meet the extreme requirements in computation, data movement and reliability that DOE applications require," Harrod said.
Under the contract awards, AMD is working on processors and memory for extreme systems; IBM is also working on memory for extreme systems; Intel Federal is working on energy-efficient processors and memory architectures; Nvidia is working on processor architecture for exascale computing at low power; and Whamcloud is leading a group working on storage and I/O.

In an era of increasing global competition in HPC, the development of exascale computing capabilities is widely seen as a key to sustaining the innovation edge in the science and technology that underpin national and economic security.

DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, provides open scientific user facilities -- including some of the world's most powerful supercomputers -- as a resource for the nation, and is working to address some of the most pressing challenges of our time.

Source: Lawrence Livermore National Laboratory