Internet Speed Quadrupled by International Team During 2004 Bandwidth Challenge
Posted by jmke, 29th November 2004, 16:06

PITTSBURGH, Pa.--For the second consecutive year, the "High Energy Physics" team of physicists, computer scientists, and network engineers has won the Supercomputing Bandwidth Challenge, this time with a sustained data transfer of 101 gigabits per second (Gbps) between Pittsburgh and Los Angeles. That is more than four times faster than last year's record of 23.2 Gbps, which was set by the same team.

The team hopes this new demonstration will encourage scientists and engineers in many sectors of society to develop and deploy a new generation of revolutionary Internet applications.

The international team is led by the California Institute of Technology and includes as partners the Stanford Linear Accelerator Center (SLAC), Fermilab, CERN, the University of Florida, the University of Manchester, University College London (UCL) and the organization UKLight, Rio de Janeiro State University (UERJ), the state universities of São Paulo (USP and UNESP), Kyungpook National University, and the Korea Institute of Science and Technology Information (KISTI). The record data transfer speed set by the group's "High-Speed TeraByte Transfers for Physics" entry is equivalent to downloading three full DVD movies per second, or to transmitting all of the content of the Library of Congress in 15 minutes, and it corresponds to approximately 5 percent of the rate at which all forms of digital content were produced on Earth during the test.
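
Those comparisons hold up as back-of-envelope arithmetic. The short Python sketch below uses assumed reference figures that are not taken from the release (about 4.7 GB for a single-layer DVD and roughly 10 terabytes for a digitized Library of Congress):

[code]
# Back-of-envelope check of the comparisons above. Assumed figures (not from
# the release): a single-layer DVD holds about 4.7 GB, and the digitized
# Library of Congress is commonly ballparked at roughly 10 terabytes.

RATE_GBPS = 101.0        # sustained transfer rate, gigabits per second
DVD_GB = 4.7             # single-layer DVD capacity in gigabytes (assumption)
LOC_TB = 10.0            # Library of Congress size in terabytes (assumption)

rate_gb_per_s = RATE_GBPS / 8.0                 # gigabits -> gigabytes per second
dvds_per_second = rate_gb_per_s / DVD_GB
loc_minutes = LOC_TB * 1000.0 / rate_gb_per_s / 60.0

print(f"{rate_gb_per_s:.2f} gigabytes per second sustained")
print(f"about {dvds_per_second:.1f} DVDs per second")
print(f"Library of Congress in about {loc_minutes:.0f} minutes")
[/code]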

The new mark, according to Bandwidth Challenge (BWC) sponsor Wesley Kaplow, vice president of engineering and operations for Qwest Government Services, exceeded the sum of all the throughput marks submitted by other BWC entrants in this and previous years. The extraordinary bandwidth was made possible in part through the use of the FAST TCP protocol developed by Professor Steven Low and his Caltech Netlab team.

It was achieved through the use of seven 10 Gbps links to Cisco 7600 and 6500 series switch-routers provided by Cisco Systems at the Caltech Center for Advanced Computing Research (CACR) booth, and three 10 Gbps links to the SLAC/Fermilab booth. The external network connections included four dedicated wavelengths of National LambdaRail between the SC2004 show floor in Pittsburgh and Los Angeles (two waves), Chicago, and Jacksonville, as well as three 10 Gbps connections across the SCinet network infrastructure at SC2004 with Qwest-provided wavelengths to the Internet2 Abilene Network (two 10 Gbps links), the TeraGrid (three 10 Gbps links), and ESnet. 10 Gigabit Ethernet (10 GbE) interfaces provided by S2io were used on servers running FAST at the Caltech/CACR booth, and interfaces from Chelsio equipped with TCP offload engines (TOE) running standard TCP were used at the SLAC/FNAL booth. During the test, the network links over both the Abilene and National LambdaRail networks were shown to operate successfully at up to 99 percent of full capacity.
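
Unlike standard loss-based TCP, FAST TCP adjusts its congestion window from measured queueing delay. The following is a minimal sketch of the window update described in the FAST TCP literature; the parameter values and the toy delay model are illustrative assumptions, not the settings used in the demonstration:

[code]
# Minimal sketch of the FAST TCP congestion-window update (delay-based).
# Constants and the toy queueing model below are illustrative assumptions.

def fast_tcp_update(w, base_rtt, avg_rtt, alpha=200.0, gamma=0.5):
    """One periodic window update.

    w        -- current congestion window, in packets
    base_rtt -- minimum round-trip time observed so far, in seconds
    avg_rtt  -- current average round-trip time, in seconds
    alpha    -- target number of packets kept queued in the network
    gamma    -- smoothing factor in (0, 1]
    """
    target = (base_rtt / avg_rtt) * w + alpha
    return min(2.0 * w, (1.0 - gamma) * w + gamma * target)

# Toy run: a long-haul path with 100 ms propagation delay, where queueing
# delay is modeled (hypothetically) as growing linearly with the window.
w, base_rtt = 1000.0, 0.100
for _ in range(200):
    avg_rtt = base_rtt + w * 1e-6
    w = fast_tcp_update(w, base_rtt, avg_rtt)
print(f"window after 200 updates: about {w:.0f} packets")
[/code]

Because the update backs off as queueing delay grows rather than waiting for packet loss, the window settles near a stable value instead of oscillating, which is what allows long, fat paths to be kept close to full.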

The Bandwidth Challenge allowed the scientists and engineers involved to preview the globally distributed grid system that is now being developed in the US and Europe in preparation for the next generation of high-energy physics experiments at CERN's Large Hadron Collider (LHC), scheduled to begin operation in 2007. Physicists at the LHC will search for the Higgs particles thought to be responsible for mass in the universe and for supersymmetry and other fundamentally new phenomena bearing on the nature of matter and spacetime, in an energy range made accessible by the LHC for the first time.

The largest physics collaborations at the LHC, the Compact Muon Solenoid (CMS) and the Toroidal LHC Apparatus (ATLAS), each encompass more than 2000 physicists and engineers from 160 universities and laboratories spread around the globe. In order to fully exploit the potential for scientific discoveries, many petabytes of data will have to be processed, distributed, and analyzed. The key to discovery is the analysis phase, where individual physicists and small groups repeatedly access, and sometimes extract and transport, terabyte-scale data samples on demand, in order to optimally select the rare "signals" of new physics from potentially overwhelming "backgrounds" from already-understood particle interactions. This data will be drawn from major facilities at CERN in Switzerland, at Fermilab and the Brookhaven lab in the U.S., and at other laboratories and computing centers around the world, where the accumulated stored data will amount to many tens of petabytes in the early years of LHC operation, rising to the exabyte range within the coming decade.
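
To put "terabyte-scale data samples on demand" in perspective, here is a quick calculation of raw transfer times at several link speeds; it counts line rate only and ignores protocol, disk, and host overheads, so real transfers would be slower:

[code]
# Rough transfer times for a one-terabyte data sample at different link speeds
# (raw line rate only; protocol, disk, and host overheads are ignored).

SAMPLE_TB = 1.0

links_gbps = {
    "OC-12 (0.622 Gbps)": 0.622,
    "Gigabit Ethernet": 1.0,
    "OC-48 (2.5 Gbps)": 2.5,
    "10 Gigabit Ethernet": 10.0,
    "SC2004 record (101 Gbps)": 101.0,
}

for name, gbps in links_gbps.items():
    seconds = SAMPLE_TB * 8000.0 / gbps   # terabytes -> gigabits, then divide by rate
    print(f"{name:<26} {seconds / 60:7.1f} minutes per terabyte")
[/code]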

Future optical networks, incorporating multiple 10 Gbps links, are the foundation of the grid system that will drive the scientific discoveries. A "hybrid" network integrating both traditional switching and routing of packets, and dynamically constructed optical paths to support the largest data flows, is a central part of the near-term future vision that the scientific community has adopted to meet the challenges of data intensive science in many fields. By demonstrating that many 10 Gbps wavelengths can be used efficiently over continental and transoceanic distances (often in both directions simultaneously), the high-energy physics team showed that this vision of a worldwide dynamic grid supporting many-terabyte and larger data transactions is practical.
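
One schematic way to picture the hybrid approach is a policy that keeps routine traffic on the routed packet network and requests a dedicated wavelength only for the largest flows. The sketch below is purely illustrative; the threshold and names are hypothetical and not part of any production control plane:

[code]
# Schematic sketch of the hybrid-network idea: routine traffic stays on the
# routed packet network, while flows above a threshold trigger a dynamically
# provisioned optical path. Threshold and names are hypothetical.

LIGHTPATH_THRESHOLD_GB = 500.0   # assumed cutoff for dedicating a wavelength

def choose_path(flow_size_gb):
    """Return which kind of path a transfer of the given size should use."""
    if flow_size_gb >= LIGHTPATH_THRESHOLD_GB:
        return "dedicated optical path (dynamically provisioned wavelength)"
    return "routed/switched packet network"

for size_gb in (2, 50, 800, 20000):
    print(f"{size_gb:>6} GB transfer -> {choose_path(size_gb)}")
[/code]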

While the SC2004 100+ Gbps demonstration required a major effort by the teams involved and their sponsors, in partnership with major research and education network organizations in the United States, Europe, Latin America, and Asia Pacific, it is expected that networking on this scale in support of the largest science projects (such as the LHC) will be commonplace within the next three to five years.

The network has been deployed through exceptional support by Cisco Systems, Hewlett-Packard, Newisys, S2io, Chelsio, Sun Microsystems, and Boston Ltd., as well as the staffs of National LambdaRail, Qwest, the Internet2 Abilene Network, the Consortium for Education Network Initiatives in California (CENIC), ESnet, the TeraGrid, the AmericasPATH network (AMPATH), the National Education and Research Network of Brazil (RNP) and the GIGA project, as well as ANSP/FAPESP in Brazil, KAIST in Korea, UKERNA in the UK, and the StarLight international peering point in Chicago. The international connections included the LHCNet OC-192 link between Chicago and CERN in Geneva, the CHEPREO OC-48 link between Abilene (Atlanta), Florida International University in Miami, and São Paulo, as well as an OC-12 link between Rio de Janeiro, Madrid, GÉANT, and Abilene (New York). The APII-TransPAC links to Korea were also used with good occupancy. The throughputs to and from Latin America and Korea represented a significant step up in scale, which the team members hope will be the beginning of a trend toward the widespread use of 10 Gbps-scale network links on DWDM optical networks interlinking different world regions in support of science by the time the LHC begins operation in 2007. The demonstration and the developments leading up to it were made possible through the strong support of the U.S. Department of Energy and the National Science Foundation, in cooperation with the agencies of the international partners.

As part of the demonstration, a distributed analysis of simulated LHC physics data was run using the Grid-enabled Analysis Environment (GAE), developed at Caltech for the LHC and many other major particle physics experiments as part of the Particle Physics Data Grid, the Grid Physics Network and the International Virtual Data Grid Laboratory (GriPhyN/iVDGL), and the Open Science Grid projects. This involved the transfer of data to CERN, Florida, Fermilab, Caltech, UC San Diego, and Brazil for processing by clusters of computers, with the results finally aggregated back on the show floor to create a dynamic visual display of quantities of interest to the physicists. In another part of the demonstration, file servers at the SLAC/FNAL booth and in London and Manchester were used for disk-to-disk transfers between Pittsburgh and England. This gave physicists valuable experience in the use of large, distributed datasets and of the computational resources connected by fast networks, on the scale required at the start of the LHC physics program.
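
The workflow described, distributing data to remote clusters, processing it in parallel, and aggregating the results for display, is a scatter/process/gather pattern. A toy sketch of that pattern follows (illustrative only; the real GAE system is far more elaborate, and the "selection cut" here is a stand-in):

[code]
# Schematic sketch of the scatter/process/gather pattern described above
# (illustrative only; the real GAE system is far more elaborate).

from concurrent.futures import ThreadPoolExecutor

SITES = ["CERN", "Florida", "Fermilab", "Caltech", "UC San Diego", "Brazil"]

def analyze_at_site(site, events):
    """Stand-in for remote processing: count 'interesting' events at one site."""
    selected = sum(1 for e in events if e % 97 == 0)   # toy selection cut
    return {"site": site, "processed": len(events), "selected": selected}

# Scatter a simulated dataset across sites, process in parallel, then gather.
dataset = range(600_000)
chunk = len(dataset) // len(SITES)
with ThreadPoolExecutor() as pool:
    results = list(pool.map(
        analyze_at_site,
        SITES,
        [dataset[i * chunk:(i + 1) * chunk] for i in range(len(SITES))],
    ))

total = sum(r["selected"] for r in results)
print(f"aggregated {total} selected events from {len(results)} sites")
[/code]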

The team used the MonALISA (MONitoring Agents using a Large Integrated Services Architecture) system developed at Caltech to monitor and display the real-time data for all the network links used in the demonstration. MonALISA (http://monalisa.caltech.edu) is a highly scalable set of autonomous, self-describing, agent-based subsystems which are able to collaborate and cooperate in performing a wide range of monitoring tasks for networks and grid systems as well as the scientific applications themselves. Detailed results for the network traffic on all the links used are available at http://boson.cacr.caltech.edu:8888/.
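
The agent-based idea can be illustrated generically: each autonomous agent samples one link and publishes a small throughput report that a central display can subscribe to. The sketch below is not the MonALISA API (which is a Java framework); all names and values are hypothetical:

[code]
# Generic sketch of agent-based link monitoring. This is NOT the MonALISA API;
# names, link labels, and the random "measurement" are hypothetical.

import random
import time

class LinkMonitorAgent:
    """Autonomous agent that samples one link and reports its throughput."""

    def __init__(self, link_name, capacity_gbps):
        self.link_name = link_name
        self.capacity_gbps = capacity_gbps

    def sample(self):
        # Stand-in for an SNMP or flow-based measurement of the real link.
        utilization = random.uniform(0.6, 0.99)
        return {
            "link": self.link_name,
            "gbps": round(self.capacity_gbps * utilization, 2),
            "timestamp": time.time(),
        }

agents = [LinkMonitorAgent("NLR Pittsburgh-Los Angeles #1", 10.0),
          LinkMonitorAgent("Abilene wave #1", 10.0),
          LinkMonitorAgent("TeraGrid wave #2", 10.0)]

# A central display would subscribe to these reports; here we print one round.
for agent in agents:
    print(agent.sample())
[/code]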

Multi-gigabit-per-second end-to-end network performance will lead to new models for how research and business are performed. Scientists will be empowered to form virtual organizations on a planetary scale, sharing their collective computing and data resources in a flexible way. This is particularly vital for projects on the frontiers of science and engineering, in "data intensive" fields such as particle physics, astronomy, bioinformatics, global climate modeling, geosciences, fusion, and neutron science.

Harvey Newman, professor of physics at Caltech and head of the team, said, "This is a breakthrough for the development of global networks and grids, as well as inter-regional cooperation in science projects at the high-energy frontier. We demonstrated that multiple links of various bandwidths, up to the 10 gigabit-per-second range, can be used effectively over long distances.

"This is a common theme that will drive many fields of data-intensive science, where the network needs are foreseen to rise from tens of gigabits per second to the terabit-per-second range within the next five to 10 years," Newman continued. "In a broader sense, this demonstration paves the way for more flexible, efficient sharing of data and collaborative work by scientists in many countries, which could be a key factor enabling the next round of physics discoveries at the high energy frontier. There are also profound implications for how we could integrate information sharing and on-demand audiovisual collaboration in our daily lives, with a scale and quality previously unimaginable."

Les Cottrell, assistant director of SLAC's computer services, said: "The smooth interworking of 10 GbE interfaces from multiple vendors, the ability to successfully fill 10 gigabit-per-second paths both on local area networks (LANs), cross-country and intercontinentally, the ability to transmit greater than 10 Gbits/second from a single host, and the ability of TCP offload engines (TOE) to reduce CPU utilization, all illustrate the emerging maturity of the 10 Gigabit/second Ethernet market. The current limitations are not in the network but rather in the servers at the ends of the links, and their buses."
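
Cottrell's point about host buses is easy to quantify: the common server I/O buses of 2004 peak below the 10 GbE line rate. A quick check using nominal bus figures (assumptions, not numbers from the release):

[code]
# Quick check of why 2004-era host buses, not the network, were the bottleneck.
# Nominal bus figures; achievable throughput is lower still due to overhead.

buses = {
    "PCI 64-bit / 66 MHz":    64 * 66e6,     # bits per second, theoretical peak
    "PCI-X 64-bit / 133 MHz": 64 * 133e6,
}

line_rate = 10e9                              # 10 Gigabit Ethernet

for name, bps in buses.items():
    print(f"{name}: {bps / 1e9:.2f} Gbps peak "
          f"({'below' if bps < line_rate else 'above'} the 10 GbE line rate)")
[/code]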

Further technical information about the demonstration may be found at http://ultralight.caltech.edu/sc2004 and http://www-iepm.slac.stanford.edu/mo...04/hiperf.html. A longer version of the release, including information on the participating organizations, may be found at http://ultralight.caltech.edu/sc2004/BandwidthRecord.