
GTC2009 Blog

The following are blog posts about GTC2009, held from September 30 to October 2, 2009, in San Jose.

Videos may take a while to load. All of this information is available on NVIDIA's GTC2009 homepage: http://www.nvidia.com/object/gpu_technology_conference.html

 

 

10/02/2009 Day Three Wrap-Up of GPU Tech Conference With Ujesh Desai
By NVIDIA, posted Oct 2 2009 at 02:52:13 PM

Ujesh Desai, NVIDIA VP of Marketing, wraps up both Day 3 and the entire action-packed GPU Technology Conference.

 

Learnings from Jen-Hsun: Fireside Chat with Jon Peddie
By Michael Diamond, posted Oct 2 2009 at 01:47:44 PM

GTC attendees were treated today to an unscripted, intimate fireside chat with Jen-Hsun, with Jon Peddie as the moderator.


 

It was a candid, close-up affair, with Jon asking Jen-Hsun questions on behalf of the sixty attending start-up companies and other guests.


 

Here are a few of the perspectives that resonated with me and that I want to share with you. 

  • Your greatest single asset is your enthusiasm.
  • Do exquisite work.
  • Everything that you need to know, when you get down to it and examine its core, you learned as a child.
  • What you are about is different than how you make money.
  • The most important things about your business are your employees, your customers, and your shareholders.
  • Separate the game from the score, do the best you can for the right reasons and work on the important things that the world needs.
  • When you know what the answer is, when you feel it in your gut, and you have the wherewithal to do it, you don’t want to look back in ten years and regret that you did not.
  • Create amazing things that people want to buy from you.
  • How NVIDIA makes decisions about what kinds of business to pursue: Has it been done already? Is it hard to do? Can only you do it? How are you going to make money at it (you might not know right off the bat)?
  • When you increase computing performance by one thousand times, a dislocation of value will occur, the industry will change, and amazing things will happen. Every 15 years something dramatic happens, and the industry looks completely different.

All the best to everyone. I look forward to being with you later on NVIDIA’s blog, and at next year’s GPU Technology conference ;-) 

-Michael

 

GPU Technology Conference 3rd Day Keynote
By NVIDIA, posted Oct 2 2009 at 12:07:45 PM

This video was taken of attendees as they exited the 3rd day keynote at the GPU Technology Conference held from Sept. 30 to Oct. 2, 2009 in San Jose, California. The keynote featured Lucasfilm's CTO Richard Kerris and Chris Horvath, a digital artist at Industrial Light & Magic.

 

Jen-Hsun Keynote Interview Part II: Being the CEO...
By Chris Kraeuter, posted Oct 2 2009 at 11:53:50 AM

Jen-Hsun was 30 when he became CEO of NVIDIA. Now 46, he reflected on some of the lessons he's learned and how they apply to the attendees here, more than 400 of whom are participating in the Emerging Companies Summit part of the GPU Tech Conference. 


"You can't possibly imagine the amount of things you should learn, you could learn to be a better CEO," he said, running down an extensive list of considerations he's  constantly faced with. "Amid all of that chaos, what do you do? The answer is very simple -- I always go back to first principles. Almost everything I've learned, I've known since I was a kid."

He said emerging companies are currently facing three issues: 1) Staying alive, 2) Optimizing financial performance, and 3) Making sure they believe in what they are doing. "If you believe in it and you can afford to invest in it, then you keep going."

For NVIDIA, Jen-Hsun drew a distinction between his company and the competition. "Our most important thing is to try not to be like them." When he's considering priorities for the company and investment opportunities, he asks if it's been done before, if it's hard, and if NVIDIA is uniquely capable of stepping forward. "If you answer those three questions as yes, then, by God, get to it." Also during the Friday discussion, Jen-Hsun predicted a coming era of technological revolution on par with what happened in the mid-1990s after Microsoft introduced Windows 95. "This is the same feeling as 1995."

"The personal computer will surely become the DVD player of the 21st century. No one falls in love with their DVD player," he said. He sees two classes of hardware coming: Free hardware attached to subscription services and then specialized hardware purchased for their "astounding capabilities."

It's all in the future for us now. While this conference has another half day of sessions and technical talks ahead, NVIDIA did give attendees something to look forward to by announcing that they intend to hold this conference again next year. Indeed, there's a lot to look forward to.  

 

Jen-Hsun Keynote Interview Part I: Investing in a Downturn...
By Chris Kraeuter, posted Oct 2 2009 at 11:52:24 AM

The tumult during the past year caused Jen-Hsun Huang to re-prioritize, much like everyone else. But, unlike everyone else, NVIDIA continued to invest, increasing R&D investment and hiring more content technology people than ever before.

"We did exactly the right thing for exactly the right reasons," he said, even acknowledging that the company's financial performance was not anywhere close to what he hoped it would be. "This is about how to separate the game from the score."

During a discussion with Jon Peddie, Jen-Hsun cited a slew of new initiatives from the past 12 months demonstrating that NVIDIA knows where technology is headed and that it's making the right investments, from 3D Vision to CUDA to OpenCL to DirectCompute to PhysX. He also touched on Keyhole Corporation, a company NVIDIA invested in whose software let people type in an address and then flew them anywhere in the world to an image of the place. That company became Google Earth.


 

"Our purpose is to help cultivate and help inspire other companies' endeavors to create what likely will be one of the most important ecosystems of the world, the beacon that will take computing to the next level," he said. 

 

From Yoda to Dumbledore with Lucasfilm's Richard Kerris
By Chris Kraeuter, posted Oct 2 2009 at 10:50:23 AM

Richard Kerris brought the legendary clips from the likes of "Star Wars" and "Star Trek", he brought new clips of movies not out for another year (M. Night Shyamalan's "The Last Airbender"), and he enlightened everyone on how Lucasfilm is changing its production approach thanks to the increased power of GPUs.


 

"It's all about the iterative approach," he said, mentioning that tapping into increased computing power helps his team refine their work in a real-time environment, to the benefit of everyone involved in a project. "Our goal is to use the GPU where ever possible to speed up simulations so our techniques can be improved upon and we can work almost interactively."

From the beginnings of Lucasfilm in 1975 with the use of the Dykstraflex camera on "Star Wars" to show the Millennium Falcon in flight, to the computer-generated 150-foot wave created for "The Perfect Storm" in 2000, to the swirling fire scene in this year's "Harry Potter and the Half-Blood Prince," the ability to achieve a more realistic (or fantastic) effect is apparent every time Richard and his team start a new project or the rest of us attend the theater to see their work.

Chris Horvath, a digital artist at Lucasfilm, said the parallel programming of GPUs mimics how our brains operate and that this is responsible for the increasingly stunning visuals seen in movies and games today. "GPU programming is difficult and represents a change of mindset, but this is how the world really works."

To that end, Lucasfilm has started constructing a GPU farm to better harness the power of NVIDIA processors. The San Francisco-based company, which operates a host of divisions (Industrial Light & Magic, Skywalker Sound, LucasArts, Lucasfilm Animation, Lucasfilm Animation Singapore and Lucas Online), brought 50 people to this week's conference.

 

Final Day of the Show and Plenty Going On
By Chris Kraeuter, posted Oct 2 2009 at 08:52:28 AM

Final day of the show, but still plenty happening. We’ve got an early wake-up with Lucasfilm CTO Richard Kerris. I’m hoping for some good movie clips – past, present and future. That will be followed by a much-anticipated 90-minute fireside chat between Jon Peddie and Jen-Hsun Huang. Nvidia’s Jeff Herbst will then lead a panel discussion about raising capital in the current economic climate, featuring Sutter Hill Ventures, Silicon Valley Bank, and Deloitte.

 

10/01/2009 Day Two Wrap-Up of GPU Tech Conference, with Rob Csongor, VP of Corporate Marketing
By NVIDIA, posted Oct 1 2009 at 07:29:59 PM

Listen to a quick video summary of Day Two excitement at the GPU Tech Conference, hosted by Rob Csongor, VP of Corporate Marketing.

Among the subjects Rob covers: Harvard's Hanspeter Pfister's fascinating talk on how GPUs are being used to answer some of science's thorniest problems -- like how the brain is wired and how the universe began; the kickoff of the Emerging Companies Summit, with 60 startups using our processors in a whole new range of ways; a talk by Rudy Sarzo, former bassist for Blue Oyster Cult and Ozzy Osbourne, about how musicians are using GPUs for special effects; and a unique initiative to raise funds for a local charity.

 

GPU Unleashed
By Michael Diamond, posted Oct 1 2009 at 05:06:00 PM

Hi everyone ;) I am blogging from the GPU Technology Conference Emerging Companies Summit (ECS). It is a unique forum for startup companies to showcase innovative applications that leverage the GPU to solve visual and high-performance computing problems. Here is an illustration of the GPU ecosystem, where NVIDIA strives to educate, encourage, and inspire companies that embrace the transformative power of GPUs in their business. 


The stars of ECS 2009 are the 60 companies here to present. Sorry we did not have enough room for all those companies that applied; we will try to have more room next year. 


 

You can only stretch the old way of doing something so far before you hit the wall; it's like trying to cover the maximum distance in the shortest time with the world's fastest car. You are better off using the right tool for the right job. A high-performance aircraft can easily outperform a rocket car. 


 

And so it is for GPU parallel compute vs. CPU single-thread compute. That is why Oak Ridge's planned supercomputer targets NVIDIA GPU computing technology to achieve an order of magnitude more performance than today's fastest supercomputer. There were some cool demos at the ECS keynote. 

The first was by Viewdle, with some really fast video face-recognition technology. 


 

Then Edge 3 Technologies demoed their gesture recognition and advanced machine-learning tech, using commodity webcams and some CUDA-optimized code to make it run fast; only 3% of the CPU's cycles were needed. 


 

And my favorite was from MirriAd, which can insert advertisements into post-rendered video feeds, integrating them seamlessly as if they were part of the original. 


 

Watch for these guys; there is $$$$ to be made here.

 

Off to the Races
By Chris Kraeuter, posted Oct 1 2009 at 05:02:08 PM

Jeffrey Vetter of Oak Ridge National Laboratory spoke again, this time during the Supercomputing Super Session, taking a step back from some of his previous comments here to look at the bigger picture of why predictive simulation is important to scientific discovery. This is, after all, what all this big iron is being put into play for. Vetter noted that areas as diverse as combustion science, climate modeling, astrophysics and fusion all now rely on simulations "as a capability to advance discovery." 

Further, researchers are combining models to ask layered questions, such as pairing up climate models with energy economics or demographic shifts.  

He also touched a bit on Keeneland, the new NSF-funded partnership detailed fully yesterday that will be Fermi-enabled. The project is indeed named after the racetrack in Kentucky and will initially open up in Spring 2010, ramping up in 2012 to three times its initial size.  

Wen-mei Hwu dished a little on Blue Waters, the massive supercomputer being developed by his institution, the University of Illinois at Urbana-Champaign, together with IBM and the National Center for Supercomputing Applications. It will become operational in 2011 and will have more than 200,000 cores, more than 800TB of memory and 10 PB of disk storage. Interestingly, the system itself will take up less than half of the new 90,000 square foot facility.

He also encouraged people to check out cuda-research.org, just launched on Wednesday. “This is really designed for people to exchange resources and collaborate,” he said. 

A colleague of Wen-mei's also spoke. James Phillips is a senior research programmer at U of I and lead developer for the NAMD parallel molecular dynamics program. "Our goal is to provide practical supercomputing," he said. NAMD targets biology and biophysics audiences, and 33,000 users have downloaded one or more versions of the software. 
 

A World Without GPUs
By Chris Kraeuter, posted Oct 1 2009 at 04:41:32 PM

Attendees and speakers here have spent much of the past two days envisioning what the future holds for a world that increasingly draws on the power of parallel processing, but some speakers today also pondered a world without GPUs.

"We'd still be cutting tape," said Simon Hayhurst, senior director of product development for Adobe, speaking about the film industry and how regressed the editing and special effects footage would be. "Parallelization and that model of data flow changes everything. You have better tools for the art of storytelling." 

Bill Dally, chief scientist at Nvidia, who joined the company in January, said developers require the creative headroom that GPUs provide now that CPUs aren't scaling anymore. "Innovation in third-party software is fueled by having more cycles."

And Steve Perlman, CEO of on-demand gaming company OnLive, backed this up by pointing out that the business and market case for expanding capabilities has brought us all to rely increasingly on the GPU instead of the CPU. 

At Motion DSP, the benefits of leveraging GPUs are tangible. CEO Sean Varah said they wouldn't have been able to handle real-time rendering of video without GPUs. "Customers said give us this in real time, and GPUs brought us over the real-time barrier and opened up a new market for us." 

 

Modified Reflectivity, Anti-Aliasing and Ray Payloads...
By Chris Kraeuter, posted Oct 1 2009 at 03:51:24 PM

Steven Parker definitely had the most technical of the talks I’ve heard so far (I know, I’m a lightweight), covering ray tracing and the OptiX engine. He's the research scientist leading the OptiX development team and a ray tracing veteran of more than a decade.

The technical aspects of the talk, of course, flew right beyond my pea brain (come on, he's talking about rasterize requests, depositing things in buffers, Lambertian shaders), but it was definitely interesting to see how a simple box on a plane progressed through increasing levels of complexity: opaque shadows then true gradated shadows, then reflections and then reflections with depth, and then rusted metal and then finally another object was placed next to the original cube, changing, of course, all the lighting and shadows of everything around it. 

Steven gamely walked the audience through the programming steps, finally noting that this was the same sort of program used in the keynote yesterday covering the stunningly lifelike Bugatti, all of which took only 150 lines of code. But that's not to say it was easy. The rendering of the Bugatti yesterday consisted of a full Monte Carlo simulation for billions of paths of light and 2 million polygons. Very cool.
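
If you're curious what "ray payloads" and a Lambertian shader actually look like in code, here is a toy sketch in plain CUDA C++. To be clear, this is not the OptiX API and not Steven's demo; every name in it (Payload, shadeLambert, the one-sphere scene) is invented for illustration:

    // Toy ray tracer: one sphere, one directional light, Lambertian shading.
    // Illustrative only -- NOT the OptiX API.
    #include <cstdio>
    #include <cmath>

    struct Vec { float x, y, z; };
    __device__ float dot(Vec a, Vec b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    __device__ Vec norm(Vec a) { float l = sqrtf(dot(a, a)); return {a.x/l, a.y/l, a.z/l}; }

    // The "payload" rides along with each ray and carries the shading result.
    struct Payload { float brightness; };

    // Lambertian (diffuse) shader: brightness ~ cosine of the angle between
    // the surface normal and the direction to the light, clamped at zero.
    __device__ void shadeLambert(Vec n, Vec toLight, Payload* p) {
        p->brightness = fmaxf(0.f, dot(n, toLight));
    }

    // One thread per pixel: intersect a ray with a unit sphere at the origin
    // and run the shader on the closest hit.
    __global__ void trace(float* image, int w, int h) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= w || y >= h) return;

        Vec org = {0.f, 0.f, -3.f};
        Vec dir = norm({(x - w/2.f)/h, (y - h/2.f)/h, 1.f});
        Payload p = {0.f};

        float b = dot(org, dir);              // ray/sphere quadratic (radius 1)
        float disc = b*b - (dot(org, org) - 1.f);
        if (disc > 0.f) {
            float t = -b - sqrtf(disc);       // nearest intersection distance
            Vec hit = {org.x + t*dir.x, org.y + t*dir.y, org.z + t*dir.z};
            shadeLambert(norm(hit), norm({1.f, 1.f, -1.f}), &p);
        }
        image[y*w + x] = p.brightness;
    }

    int main() {
        const int w = 64, h = 32;
        float* img;
        cudaMallocManaged(&img, w * h * sizeof(float));
        dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
        trace<<<grid, block>>>(img, w, h);
        cudaDeviceSynchronize();
        const char* ramp = " .:-=+*#%@";      // ASCII brightness ramp
        for (int y = 0; y < h; ++y) {
            for (int x = 0; x < w; ++x) putchar(ramp[(int)(img[y*w + x] * 9.f)]);
            putchar('\n');
        }
        cudaFree(img);
    }

Compile it with nvcc and it prints an ASCII-shaded sphere. OptiX does something conceptually similar -- user-supplied shader programs invoked per hit, with a payload threaded through -- at vastly greater scale and with far more machinery.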

 

Nexus: A powerful IDE for GPU Computing on Windows (Session 1023)
By NVIDIA, posted Oct 1 2009 at 03:32:44 PM

This video was taken of attendees as they exited from a session on ‘Nexus: A Powerful IDE for GPU Computing on Windows’ (Session 1023) at the GPU Technology Conference held from Sept. 30 to Oct. 2, 2009 in San Jose, California.

 

The Power of Parallel Computing Isn't Just for Big Corporations or Labs.
By Chris Kraeuter, posted Oct 1 2009 at 02:03:34 PM

More than 60 companies from 15 countries are participating in the second annual Emerging Companies Summit here at NVIDIA's GPU Tech Conference.      

The chief execs from three of those emerging companies demonstrated how they're leveraging graphics processing units (GPUs) to build businesses that they said weren't possible previously.   

"The beauty of the GPU is that it is enabling these algorithms which traditionally were computational bottlenecks," said Tarek El Dokor of Edge 3 Technologies, as he demoed his company’s gesture recognition interface to surf the web using two commodity cameras that weren't even hooked together. The CPU usage amounted to only 3% of the computer's capacity -- the equivalent of an optical mouse -- with the rest handled by the GPU.    

Likewise, Laurent Gil of Viewdle talked about the business possibilities of licensing his company's picture and video recognition and sorting capabilities, trying to become a PDF equivalent of visual analysis, for instance. Viewdle's ability to organize 10,000 photos in two minutes is handled through the GPU. Viewdle was started in Ukraine and currently has 24 employees in Kiev, 22 of whom are engineers.    

And London-based MirriAd CEO Mark Popkiewicz is enmeshed in a very hot area: Helping video content creators monetize their content through contextual, dynamic ad placement. This involves placing, say, a box of Cheerios in an old Cosby Show episode or wrapping a Coke ad around a park bench in Forrest Gump, but doing so in a way that is "completely contextual and comfortable." The GPU is helping him do this more realistically than ever before and doing it quicker, too. The company already has partnerships with ABC, Comcast and NBC, among others. 

 

GPU Technology Conference 2nd Day Keynote
By NVIDIA, posted Oct 1 2009 at 01:18:55 PM

This video was taken of attendees as they exited the 2nd day keynote given by Hanspeter Pfister, Harvard University, at the GPU Technology Conference held from Sept. 30 to Oct. 2, 2009 in San Jose, California.

 

Ladies and Gentlemen, We Have a Winner
By NVIDIA, posted Oct 1 2009 at 12:02:11 PM

We have a set of winners, of the CUDA Superhero Challenge competition, that is. 

The contest, which was open to over 200,000 members of TopCoder, had a superficially simple premise. Entrants were to implement an optimized CCL – connected component labeling – algorithm to process seriously hi-rez 200MP+ images from the Hubble Space Telescope. 
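
For readers wondering what that actually involves, here is a back-of-the-envelope sketch of one common GPU formulation of CCL -- iterative label propagation -- in CUDA C. It illustrates the general technique only; it is not any contestant's entry, and the 4-connectivity choice and all names are my own assumptions:

    // Connected component labeling by label propagation (illustrative sketch).
    // Every foreground pixel starts with its own index as its label, then
    // repeatedly adopts the minimum label among its 4-neighbors until no
    // label changes anywhere; pixels sharing a final label form a component.
    #include <cstdio>

    __global__ void initLabels(const int* in, int* labels, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) labels[i] = in[i] ? i : -1;   // -1 marks background
    }

    __global__ void propagate(const int* in, int* labels, int w, int h, int* changed) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= w || y >= h) return;
        int i = y * w + x;
        if (!in[i]) return;                      // skip background pixels
        int best = labels[i];
        if (x > 0   && in[i-1] && labels[i-1] < best) best = labels[i-1];
        if (x < w-1 && in[i+1] && labels[i+1] < best) best = labels[i+1];
        if (y > 0   && in[i-w] && labels[i-w] < best) best = labels[i-w];
        if (y < h-1 && in[i+w] && labels[i+w] < best) best = labels[i+w];
        if (best < labels[i]) { labels[i] = best; *changed = 1; }
    }

    int main() {
        const int w = 8, h = 4, n = w * h;
        const int img[n] = {1,1,0,0,0,1,1,1,
                            1,0,0,1,0,0,0,1,
                            0,0,1,1,0,0,0,1,
                            0,0,0,1,0,1,1,1};
        int *in, *labels, *changed;
        cudaMallocManaged(&in, n * sizeof(int));
        cudaMallocManaged(&labels, n * sizeof(int));
        cudaMallocManaged(&changed, sizeof(int));
        for (int i = 0; i < n; ++i) in[i] = img[i];

        initLabels<<<(n + 255) / 256, 256>>>(in, labels, n);
        cudaDeviceSynchronize();
        dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
        do {                                     // iterate to a fixed point
            *changed = 0;
            propagate<<<grid, block>>>(in, labels, w, h, changed);
            cudaDeviceSynchronize();
        } while (*changed);

        for (int y = 0; y < h; ++y, puts(""))    // print final label per pixel
            for (int x = 0; x < w; ++x) printf("%3d", labels[y*w + x]);
    }

Simple propagation like this needs many passes on images with long, snaky components, which is exactly why the contest rewarded cleverer, faster formulations.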

Coders had a week to learn CUDA, many for the first time ever, and then 11 days to do their work. More than 220 entries were received and judged based on accuracy and raw speed! The winners were announced at the GTC’s Research Summit event. They are:

1st place: Micha Riser (iquadrat), Zurich, Switzerland -- $2,500
2nd place: Hou Qiming (b285714), Tangshan, China -- $1,000
3rd place: Sergey Ilin (nemossi), Omsk, Russian Federation -- $750
4th place: Jaco Cronje (JacoCronje), Rietvalleirand, South Africa -- $500
5th place: Noriyuki Futatsugi (foota), Tokyo, Japan -- $250

The next CUDA challenge will take place in late November -- if you want to show your CUDA-fu, sign up now at www.topcoder.com.

 

This is Your Brain on GPU
By Michael Diamond, posted Oct 1 2009 at 11:34:45 AM

Hello GPU fans. I just came back from Hanspeter Pfister's Harvard University High-Throughput Science keynote at the GPU Technology Conference -- really fascinating stuff.

  

Freeze a brain, cut out the part of interest, slice it wafer thin over and over again, image it, reconstruct it, and then visualize it. 

 

Run it 23x faster on GPU vs CPU. 

 

Look at your results.

 

Set up a scaling laboratory with robots for loading the brain wafers, scan, and load into racks of GPU clusters, rinse and repeat. 

 

Soon enough, shrink it, make it real time, and bit-bam-boom, you've got a tricorder. Good stuff, Maynard ;-)

 

 

 

Attendee Impressions, Opening Day Keynote
By NVIDIA, posted Oct 1 2009 at 10:58:25 AM

This video was taken of attendees as they exited the opening keynote at the GPU Technology Conference held from Sept. 30 to Oct. 2, 2009 in San Jose, California.

 

Hanspeter tackles the big questions...
By Chris Kraeuter, posted Oct 1 2009 at 10:57:14 AM

Questions: How is the brain wired, how did the universe start, how does matter interact at the quantum level, how does the human visual system work, how can we prevent heart attacks?  Answer: Hanspeter Pfister. Yes, that's the answer. Hanspeter wrestles with some of the largest questions faced by scientists and researchers from his post as a computer scientist at Harvard University. And, as you can imagine, tackling those questions requires some heavy compute power.  

For instance, a project known as Connectome (so dubbed because this project is on par with the Human Genome Project) is trying to map the wiring diagram of the brain down to individual neurons and synapse connections. Hanspeter estimated that mapping just 1 cubic millimeter of brain tissue would require 1.5 Petabytes of storage. "This will be an exascale-size data project," he said, echoing some of yesterday's comments envisioning the roadmap to make this a reality.    

He also discussed a radio astronomy project known as the Murchison Widefield Array, attempting to understand what happened in the time frame from 300,000 years after the Big Bang to 1 billion years after the Big Bang. Apparently, and I was unaware of this, not much is known about that stretch of time. In conjunction with the Harvard Center for Astrophysics and others, they are building in middle-of-the-middle-of-nowhere Boolardy, Australia, an antenna array and "supercomputer" center (in a temporary trailer) that can only pull 20kW to do all of the necessary computations. With such a power constraint, Hanspeter opted for a GPU cluster to churn the data.    

Hanspeter sees a growing necessity in moving data crunching capabilities closer to the source of the data collection to eliminate the bandwidth restrictions of transmitting data (whether that be via data pipes or via FedEx). Other benefits Hanspeter gets from bringing an inexpensive computational source such as GPU clusters closer to the data: Creation of a feedback loop and an ability to scale systems.  

The projects he discussed on Thursday were of immense scope, scale and complexity. Utilizing new computing techniques to tackle them has provided him with new opportunities, but he did highlight some challenges with using GPUs for high throughput computing, namely the need for higher-level programming models, scalable programming models, and plug-and-play parallel high-throughput I/O. Scientists, for example, want to use the programming languages they already know so he said more accommodation of domain-specific languages would be helpful. 

 

Party Like It’s 2009
By Michael Diamond, posted Oct 1 2009 at 10:31:42 AM

I’m writing from the GPU Technology Conference 2009 Exhibitor Showcase.

It is 6:30pm Wednesday night, and the showroom floor is packed.

Here are the key things you need to know about planning a professional gathering like this to make sure it's full of enthusiastic happy attendees.

Step 1: Open bar. In this case, six of them.

Step 2: Meat so large that it can be sculpted into abstract art.

Step 3: Interesting interactive demos, like this augmented reality demo from seac02.

  

Or this cool surround screen immersive flight demo by Scalable.

 

Or like this interactive computer vision 3D Vision game demo from softkinetic. Yes, that will be you soon enough, at home, work or play, so better work on your moves.

Step 4: Mix in tons of serious tech demos.

Like Tesla servers? Check out the teraflops rack on this one from Colfax.


And this really impressive PX100 thin-client remote workstation demo by Dell.

 

Or this high-performance workstation multiscreen sync demo by Boxx.

 

All in all, it was about having lots of the world’s leading GPU technology professionals in one spot, mixing it up, networking, sharing, planning, learning, and having fun.

 

There's Only One Way to Rock
By Chris Kraeuter, posted Oct 1 2009 at 09:40:38 AM
Second day of Nvidia's GPU Tech Conference and another packed agenda coming your way. Yours truly will be attending Hanspeter Pfister's keynote (starting momentarily) and then sitting in on the Emerging Company address and the panel discussion on Future Directions of GPU Computing. After lunch, I'll be bringing you recaps of the Interactive Ray Tracing with OptiX Ray Tracing Engine followed by the Supercomputing Super Session. The room is filling up in anticipation of Hanspeter's talk now. It's a bit early to be listening to Van Halen's "There's Only One Way to Rock" at the incredibly loud level that's pumping through the speakers, but that seems to be getting everyone geared up for the day's events! More soon. 
 

09/30/2009 3D Cameras, Augmented Reality, Early Disease Detection = Inside Out Thinking
By NVIDIA, posted Sep 30 2009 at 09:15:49 PM

Dan Vivoli, NVIDIA SVP, gives a dynamic wrap-up of day one at the GPU Tech Conference in San Jose, California.

 

Big computers come with big problems
By Chris Kraeuter, posted Sep 30 2009 at 08:22:44 PM

High performance computing (HPC) experts took to the GTC stage Wednesday afternoon to discuss the pain points and opportunities in the upper realms of computing. Each seemed to be looking at a future that will require new approaches to meet an exascale horizon.

Bloomberg CTO Shawn Edwards talked about the immense demands of trying to price 1.3 million bonds in an eight-hour window overnight, every night. Jeffrey Vetter, a computer scientist at Oak Ridge National Laboratory, looked at the challenges of adding computational demands like dynamic vegetation and stratospheric chemistry to an already hefty climate modeling workload. And Cray CTO Steve Scott echoed everyone's concerns surrounding power envelopes in their massive data centers.

Steve, who joined Cray 20 years ago and previously was the chief architect of the Cray X1 scalable computer, estimated that an exaflop computer system, targeted for the second half of next decade, would consume 100MW of power. That's fairly unrealistic on economic terms and undesirable on environmental terms.

Jeffrey showed Oak Ridge's roadmap to an exaflop system and highlighted the need to start building a new facility in a couple of years that could indeed accommodate a 100MW power supply across 260,000 square feet, but said, "It's not practical to build an exascale system." He leads Oak Ridge’s Future Technologies Group and directs the Experimental Computing Laboratory. And if that isn't enough, he's also a professor at Georgia Institute of Technology.  

Even now, Jeffrey is running into energy constraints at Oak Ridge’s current data center, which covers 40,000 square feet and sucks down 12MW of power. "We have floor space but not the power to add more machines," he said. 

Each speaker acknowledged the need to start doing things differently to keep pressing forward with solving the problems that HPC should be addressing. 

"CPU sequential computing has hit a wall," said Bill Dally, who joined Nvidia in January as chief scientist. He's a legend in parallel computing, previously chairing the computer science department at Stanford for a decade and, before that, doing pioneering work at Caltech and MIT. "More and more of the value of the computer system is not being delivered on the CPU, it is being delivered on the GPU."

Dally showed some pretty impressive stats from oil and gas giant Hess on seismic processing, comparing a deployment of 32 Tesla S1070s vs. a deployment of 2000 CPU servers. For equal performance, they’re citing 31x less space, 20x lower cost and 27x lower power. "It's just a more efficient way to get the job done," he said. 

Likewise, Bloomberg's Shawn Edwards extolled the benefits his global media and data company received after shifting to a heterogeneous CPU-GPU environment, and he's looking forward to applying his learnings to other issues his customers face, such as derivative pricing, risk management and portfolio valuation. "We're solving our customers' problems and they don't care how we compute it -- they just want the answers when they need them."

But there's always something to worry about. Asked about other major roadblocks to an exascale future, Steve said programmability remains a major challenge while Jeffrey cited limitations in chipsets as potentially constraining traffic flow.  

 

GPU Computing Overview (Session 1412)
By NVIDIA, posted Sep 30 2009 at 08:09:38 PM

This video was taken of attendees as they exited the GPU Computing Overview (Session 1412) at the GPU Technology Conference held from Sept. 30 to Oct. 2, 2009 in San Jose, California.

 

Shazzam, I Have Seen the Light, and it is Fermi
By Michael Diamond, posted Sep 30 2009 at 07:16:18 PM

I just witnessed a testament to human ingenuity at GTC 2009, where NVIDIA CEO Jen-Hsun Huang introduced NVIDIA’s new Fermi GPU architecture.

Years in the making, and it is finally here.

 

Fermi to me feels like this: Start with an SR-71 Blackbird for speed, add two parts Prius for fuel economy, add 100 gallons of moonshine for kick, three parts light saber for lethality, DNA from Megan Fox for sex appeal, fold space like in Dune so you can get close to a dark star with enough gravity to smash it all into a fingernail-sized chip, hit it with millions of lightning-bolt lines of driver code written by GPU Jedi Masters, and then bake until ready.

 

Fermi has three billion transistors on 40nm, 512 CUDA cores, eight times the double-precision compute, IEEE 754-2008 support, ECC memory, support for Fortran, C++, C, OpenCL, DirectCompute, Java, and Python, and to top it off, Nexus, the world's first fully integrated computing application development environment within Microsoft Visual Studio.

All I can say is Wow, can’t wait to get it.

 

 

09/30/2009: Important Trends in Visual Computing
By Chris Kraeuter, posted Sep 30 2009 at 06:49:30 PM

There are many problems still to be worked out and many questions without answers in the visual computing world, but that's not daunting to leaders in the field. 

"There is a considerable amount of improvement we still need to achieve so it's a great time to be here," said Horst Bischoff, who travelled from Austria from his post as a professor at the Institute for Computer Graphics and Vision at the Technical University Graz to speak at NVIDIA's GPU Tech Conference.

The challenges involved in solving his "holy grail" questions -- segmentation, correspondence, and recognition -- are balanced by recent progress made in the field of visual computing, helped by the advancement of hardware. And, believe me, this is something he's thought a lot about -- he's published more than 400 peer reviewed scientific papers focused on object recognition, visual learning, motion and tracking, visual surveillance and biometrics, medical computer vision, and adaptive methods for computer vision.

A fast-paced talker, he cited a project unveiled last week that drew on 2,000 photos of the Colosseum in Rome posted to Flickr, matching 819,000 points and rendering them into an impressive 3D model. He's not affiliated with that project, but matching and reconstructing everything took 21 hours. He also showed a real-time optical flow demonstration of him on stage, nicely illustrating the pairing up of the possible with the practical. 

Likewise, Blair MacIntyre, an associate professor at Georgia Tech, returned to the melding of the physical and virtual world with the Ferrari tire in Jen-Hsun's keynote as a demonstration of the growing capabilities in this area. He's spent the past 18 years conducting augmented reality research and has directed the GVU Center's Augmented Environments Lab for 10 years. He sees even greater potential for augmented reality on mobile devices. 

This is key because although high-powered compute systems drive the best AR systems today, the integration of the physical and virtual in everyday life with everyday technology will lead to the greatest transformation. At that point, we'll actually be able to be immersed in a mobile AR world: "We need the graphics integrated with the world around us," he said.

Pat Hanrahan pondered some of the more vexing questions preventing the widespread use of graphics in people's work and personal lives. Pat, the Canon Professor of computer science and electrical engineering at Stanford, has both impressive academic chops and business chops, yet admitted to not really using graphics much -- even at work. "We produce computer graphics, but we don't use them!" This from a guy who was a founding employee at Pixar (btw, he's also founded two other companies and previously served on the faculty at Princeton).

He's focused on trying to better visually represent massive databases, showing how data can be thought of as an n-dimensional image. The opportunity is clear: Information is everywhere, with Pat noting that each individual produces about 2GB of info a year and Wal-Mart is kept busy storing 1/2 Petabyte of info. He admitted to not having the answers, but he was incredibly excited by the possibilities of manipulating and interacting with data the same way we all have become accustomed to flying around the world in Google Maps. 

"This is extremely hard and most databases aren't built for this," he said. But, like his previous speakers, he envisioned visualization eventually becoming available anywhere and everywhere at the same time.  

 

Nvidia's Jen-Hsun Huang's Keynote At GPU Tech Conference, Pt. 2
By Chris Kraeuter, posted Sep 30 2009 at 02:31:46 PM

Jen-Hsun is taking us through the technology evolution of the company, starting with the Riva 128, which had only 3 million transistors on it. "The last 16 years felt like warp speed to me -- and I was watching it from inside the tornado," Jen-Hsun said. (btw, the presentation is continuing in 3D -- this could be the wave of the future for presentations.) He's now extrapolating out the rate of advancement in computing in the past and carrying that forward into the future another decade, imagining the power of today's fastest supercomputer functioning as your personal computer in only 10 years. "What would you do with 1 petaflop?"

"What we want to do is create a photorealistic image," Jen-Hsun said. And NVIDIA isn't far off. He just demoed an interactive ray tracing simulation of a sports car at dusk that couldn't have been done previously. The door opened and the scene was resimulated realistically within seconds, showing accurately the direct and indirect shadows on the open door. "When you have more and more computational resources, we will take fewer shortcuts. We are trying to create more exquisite images."

 


Parallel Computing

Jen-Hsun wrapped up the visual computing part of the presentation and we're done with the 3D glasses. Bummer, but it does make typing easier. We're now talking parallel computing, especially NVIDIA's CUDA architecture. Couple of details on the CUDA ecosystem: More than 90,000 developers focused on CUDA and 200 universities teaching CUDA as part of their computer science programs. "This is the most pervasive parallel computing architecture ever," Jen-Hsun said.

We're now getting to the impact of speed advances and what it means to people, to industries, to cultures, to life. "What does it mean to someone who cares how long it takes to do something when you can speed things up 140 times, 100 times or even 50 times? It's like being able to go from San Francisco to New York in three minutes. A speed up of that kind is transformative. It would completely transform adjacent industries."

Indeed. Jen-Hsun just highlighted how parallelism benefitted Johns Hopkins and its important work in simulating the first 10 seconds of a levee break, when the greatest damage is done. That simulation was compressed from 24 days to 4 hours with the CUDA architecture. More importantly, they can increase the fidelity of their simulations, Jen-Hsun said.

He then brought out David Robinson, CEO of Techniscan, to talk about the Utah-based company's work utilizing ultrasound for early tumor detection targeted at catching early signs of breast cancer. The processing task: 9 million voxels and 120 million FFT calculations. Four CPU clusters would take more than an hour, while two Tesla C1060s would take less than 30 minutes, for a GPU performance/price improvement of 8x. "The idea that we can detect more cancers, smaller cancers, cancers in women that mammography doesn't serve well -- that's exactly our goal," David said.
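
As a rough illustration of that kind of workload, here is a minimal sketch of batching many small FFTs into a single call with cuFFT, the FFT library that ships with CUDA. The sizes are invented for the example, and this is in no way Techniscan's actual pipeline:

    // Batched 1D FFTs with cuFFT (illustrative sizes only).
    #include <cuda_runtime.h>
    #include <cufft.h>
    #include <cstdio>

    int main() {
        const int nx = 1024;     // points per transform (assumed)
        const int batch = 4096;  // transforms per call (assumed)

        cufftComplex* data;
        cudaMalloc(&data, sizeof(cufftComplex) * nx * batch);
        // ... copy signal samples into `data` here (omitted) ...

        // One plan runs the entire batch on the GPU in a single call,
        // which is where the win over looping on a CPU comes from.
        cufftHandle plan;
        cufftPlan1d(&plan, nx, CUFFT_C2C, batch);
        cufftExecC2C(plan, data, data, CUFFT_FORWARD);  // in-place forward FFT
        cudaDeviceSynchronize();

        cufftDestroy(plan);
        cudaFree(data);
        printf("ran %d FFTs of length %d\n", batch, nx);
        return 0;
    }

Build with "nvcc -lcufft". A pipeline at the scale described above would presumably chunk its 120 million transforms into many such batches, but the batching idea is the heart of the speedup.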

Jen-Hsun is now discussing the new CUDA GPU architecture, codenamed Fermi. “It’s an absolute powerhouse.” Fermi has 3 billion transistors and features up to 512 cores. The GPU has six 64-bit memory partitions, for a 384-bit memory interface, supporting up to a total of 6 GB of GDDR5 DRAM memory. Oh, and there’s native support for C++ and this is the first GPU with support for ECC. He just held Fermi up and the camera flashes in the room just exploded. "We have a small surprise for you -- [our engineers] worked around the clock to get us a working system." That demo is coming up.

The demo of the new Fermi silicon, generation to generation, reflected a speedup of 5 times. Talking architecture now, with Jen-Hsun saying Fermi is the soul of a supercomputer in the body of a GPU. He's hitting all the news highlights now, and is talking up Nexus, the first development environment for massively parallel computing integrated into Microsoft Visual Studio. Beta version coming out Oct. 15.

Jeffrey A. Nichols, associate laboratory director for Computing and Computational Sciences, Oak Ridge National Laboratory, just talked about how he plans to leverage the increased power of Fermi by developing climate models that break down regional weather patterns, which require increased physics, resolution and compute power. Beyond today's announcement covering Fermi and ORNL's coming supercomputer, Jen-Hsun asked what else Nvidia could do for him. "Another order of magnitude or two in peak performance would be great," Jeffrey said.

"Sure, no problem," Jen-Hsun responded.

That wraps up the parallel computing section. Jen-Hsun has moved forward to the Web Computing section.


Web Computing

Johnny Loiacono, SVP of Creative Tools Business, Adobe, leads off this discussion by noting that Adobe measured video streamed to the desktop via Flash two years ago at about 3 to 4 Petabytes a month; now that number exceeds 60 to 70 Petabytes a month. Mobile is also in effect here, with Johnny highlighting that Flash is on 1.2 billion mobile handset devices. "We see mobile devices moving far beyond laptops and notebooks," Johnny said. He then played an HD video streaming over the web in Flash, the first time that's been done, Jen-Hsun said. It streamed seamlessly, no jumping, and very crisp.

Next up, Michael Kaplan of mental images, the NVIDIA subsidiary, showed some photorealistic images of a very cool office environment in natural light, all rendered in real time from his standard laptop computer. He accurately simulated light changes throughout the day and movement around the room. All of which crystallized in seconds. He also mocked up a completely new configuration in the room, such as inserting a desk in a blank space (which involved billions of sample calculations in a matter of seconds). Wow.

RTT CEO Ludwig Fuchs came on stage to show off the power of GPU computing with a Ferrari customization application that allowed a raft of options, such as contrasting stitchings and colors and various texture options for different parts of the dashboard (among many, many other options), all reflected instantly on the screen, essentially designing the entire interior from the ground up.

Jen-Hsun and Ludwig then went over to a Ferrari tire on the stage and showed on the screens a video of the same tire, but with a simulated rim inside of it, complete with disk brake and the Ferrari logo showing clearly. And then they moved a direct light around the tire, which reflected accurately and in real time on the simulated rim on the screen. Very cool.

And that brought to a close Jen-Hsun's keynote, with him summing up the demos that had been put on during his talk: "We've now moved the GPU from being a computer graphics centric device to being a general purpose device."

Indeed.

 

Nvidia's Jen-Hsun Huang's Keynote At GPU Tech Conference
By Chris Kraeuter, posted Sep 30 2009 at 01:25:19 PM

NVIDIA hosts its second annual GPU Technology Conference this week, and CEO Jen-Hsun Huang is going to kick off the festivities with a keynote soon. I'm live blogging this session and this post will be updated throughout the keynote. A press contingent of more than 100 has streamed into the Regency Ballroom at the Fairmont Hotel in San Jose. The wider doors have now been flung open and another 1,000 people are packing this room. Everyone's been given 3D glasses. The room is buzzing, rock music is blaring through the speakers, and a video of graphics and processors is looping.

Stay tuned to this blog and other NVIDIA channels throughout the next three days for video dispatches of attendees and a steady stream of commentary from the likes of Kevin Krewell and Mike Diamond, even cartoon posts from Steve Lait (a Pulitzer Prize nominee, cool!). You can follow the action here on this blog and on Twitter at @nvidia_news or @nvidiadeveloper with hashtag #gputechconf.

The three-day show is jammed with presentations that I'm looking forward to recapping for everyone: After the keynote today, I'll be covering the Important Trends in Visual Computing panel, featuring three professors dishing on computer vision, augmented reality and visual analytics, and then the Breakthroughs in High Performance Computing panel, focused on the future of HPC and the GPU (I'll try and keep the acronyms to a minimum, but we are talking tech here, so it does go with the territory). More momentarily.

Already, we’re seeing a stream of news cross the wires covering off some of what Jen-Hsun is likely to address. Nvidia just announced its new CUDA GPU architecture, codenamed Fermi, to accelerate performance on a wide array of computational apps. Most striking, Oak Ridge National Laboratory will build a new supercomputer using NVIDIA GPUs based on the Fermi architecture that will be 10 times more powerful than today’s fastest supercomputer. Wow. News also from PNY Technologies, mental images, EM Photonics and others. 

The show is starting off with instructions for everyone to put on their 3D glasses. The video playing of cars, animated figures, sports video games, fire, and facial rendering is pretty stunning. Jen-Hsun has been introduced and he's instructed everyone to keep their glasses on. He's larger than life on two massive video screens flanking each side of the stage, in 3D, with real-time calibration going on against a stream of bubbles. Pretty impressive. By the way, it's standing room only in this room, with people lined up against the back wall.

 

Do You Feeelll, Like I Do?
By Michael Diamond, posted Sep 30 2009 at 12:49:00 PM

GTC 2009 is about to get underway, and it’s a perfect day for everything GPU, with folks coming in from all over the planet. It’s going to be a hopping event.

The first thing that you notice when you walk into the Fairmont in San Jose, is Leonardo, the gigantic paintball gun with 1,100 barrels, built by Adam and Jamie, hosts of the well-known MythBusters show.

 

If you have not seen it in action, or just want to relive fond memories from last year’s NVISION conference when it was rolled out in all its glory, it’s worth a watch.

Looking at the GTC presentation agenda, two things really caught my attention.

First, most of the papers aren’t being presented by NVIDIA. Second, out of the approximately 130 hours of content, almost none of it is about graphics for gaming.

Don’t get me wrong. I’m a gamer at heart, and I have been all my life. If anyone comes out with a Peter vs. Chicken fight game, I will be at the head of the line to buy it. It’s the best fight scene ever.

So then what is everyone excited about at a GPU Technology conference if it’s not about gaming?

In short, the massive power of GPU parallel compute is enabling incredible, mind-shifting applications that run orders of magnitude faster, in some cases more than 100x faster, than conventional systems. The new norm, and the excitement in the industry, is GPU parallel compute.

I wonder who will be the first to make a CUDA version of Peter Frampton’s famous TalkBox audio synthesizer.

Tell me, Do you feeelll, like I do?

 

I Am At One With My Inner GPU Nerd
By Michael Diamond, posted Sep 30 2009 at 12:35:11 PM

Hi, it's Michael Diamond reporting with color commentary from the GPU Technology Conference (GTC), the place you need to be if you want to know what is going on in the GPU industry.

The inner GPU nerd in me is in full bloom, and I have to admit that I kinda feel like this baby inspired by Beyonce.  I am inspired by all things GPU!

 

GTC 2009 is being held at the handsome Fairmont Hotel in San Jose. To see it in 3D, download Google Earth and then type “Fairmont, San Jose, CA” into the search bar.

 

Once in Google Earth, fly over to the home that you grew up in; it is a lot of fun and brings back memories. It is a beautiful day in San Jose, and I can’t wait to share with you in later blogs some of what is going on at GTC.

 

09/29/2009: Top 10 Must-See Sessions at the GPU Technology Conference
By NVIDIA, posted Sep 29 2009 at 05:07:11 PM

With the GPU Technology Conference less than a day away, we are re-posting the following blog entry from our GPU Computing technologies product manager, Will Ramey.

As a product manager for our GPU Computing technologies and a former software engineer, I’m interested in a wide variety of topics and technologies, so I’ll be spreading my time between sessions on developer tools, programming techniques and research topics.

In addition to the pre-conference tutorial I’m presenting on the first day, there are more than a dozen tutorials and over 100 sessions covering everything from algorithms to visualization.

Looking through the sessions for the Developer Summit and Research Summit, here are my Top 10 so far, in no particular order:

* Reconstructing the Brain: Extracting Neural Circuitry with CUDA and MPI (1075)

I’ve always wanted to know more about how my brain works, and this Harvard researcher is modeling the brain using MPI on a CUDA cluster.  Way cool.

* Advances in GPU-based Image Processing and Computer Vision (1020)

The whole idea of software being able to recognize objects and decipher information in the physical world fascinates me.

* Large-Scale Text Mining on the GPU (1025)

I want to learn more about the kinds of database operations that benefit from GPU computing.

* Performance Primitives for Video Codec and Image Processing (1028)

Sounds like a collection of GPU-optimized Lego blocks that can be used to build tons of different image filters, video codecs, etc.  Who doesn’t love Lego?

* You Might Also Like: A Multi-GPU Recommendation System (1034)

Sifting through tons of data to provide me with a personal recommendation for a new book, movie or food I’m likely to enjoy sounds like a good thing.  But how do they figure out what I’ll like?

* Mapping Satellite Imagery on the GPU: Fast Orthorectification and Pan-Sharpening (1037)

I want to know how soon they’re going to be able to read my license plate from space.  This one makes me just a tiny bit anxious but it’s better to face your fears head-on, right?

* NEXUS: A Powerful IDE for GPU Computing on Windows (1023)

This new tool integrates support for GPU debugging & profiling in Visual Studio and is getting rave reviews from the developers participating in the preview program.  There’s a hands-on lab (1098) running all day Thursday and Friday, so you can give it a try too.

* OPLib: A GPL Library of Elementary Pricing Functions in CUDA/OpenCL and OpenMP (1005)

I’ve been wanting to learn more about how OpenMP applications can take advantage of the CUDA architecture, and maybe I’ll learn how to make some money in the stock market at the same time.  :-)

* Interactive Ray Tracing with the OptiX Ray Tracing Engine (1048)

I saw some of their amazing demos of interactive ray tracing several months ago, and hear they have even more realistic demos ready for GTC.

* Zombies on Tegra: A Case Study in Mobile Augmented Reality (1069)

This one sounds like a fun session that will raise my expectations for gaming on mobile platforms.  Did you know that the Tegra processor is powering the new Zune HD?

* Face Recognition for Photographs and Video (1070)

I’ve been playing with the face recognition features in Picasa web albums, and it’s pretty good most of the time.  I wonder how long until my camera can name the people (and places) in my pictures as I take them.

* Driving on Mars: Simulating Tracked Vehicle Operation on Granular Terrain (1106)

Designing semi-autonomous robots for a Mars mission?   Count me in!

* Convolution Soup: A Case Study in CUDA Optimization (1401)

When it comes to image processing, Joe is one of the smartest guys I know so I’m sure I’ll learn something new in this session.

* Directing Experiments in the International Space Station with GPU-Assisted Image Analysis (1437)

I’ve always wanted to learn more about the kinds of experiments they do at the international space station.  What kinds of experiments can only be done in zero-g anyway?

*  Languages, APIs, and Developer Tools for GPU Computing

This is the tutorial I’m presenting, so I can’t miss this one!   You shouldn’t either if you want to get your arms around the basics of GPU Computing before attending the more advanced sessions later in the conference.

OK, that’s more than 10, but it’s the best I can do for now.  I’ll have to whittle down the list later.

There are also a bunch of interesting companies presenting at the Emerging Companies Summit, but I’ll let you explore that on your own since everyone with a Full Conference Pass or Research Summit Pass can attend these sessions as well.  The full sessions catalog is available at http://www.nvidia.com/object/gpu_tech_conf_agenda.html.

If you’d like to share your own Top10 GTC sessions or ask questions about the conference, please post them in our developer forums at:
   http://forums.nvidia.com/index.php?showtopic=106017

Hope to see you in San Jose on Sept. 30!