This is a guest post by Christopher Harr, senior site reliability engineer, RSNA. Christopher is responsible for architecting and maintaining RSNA’s infrastructure on premises and in the cloud.

About the customer: The Radiological Society of North America (RSNA) is a nonprofit organization that promotes excellent patient care and healthcare delivery through education, research, and innovative technology. Its roughly 52,000 members come from 145 countries around the world, and membership perks include free access to continuing medical education (CME) courses, grant opportunities, and admission to the annual meeting, the premier event for radiologists.

When your doctor orders a medical imaging exam like an X-ray, MRI, or ultrasound, you’re counting on the radiologist to be up to date on the latest science. That’s where the Radiological Society of North America (RSNA) comes in. IT plays a big role in everything we do for members, from our COVID-19 image library for researchers to our hands-on Case of the Day educational website.

The sheer size of medical images makes our job challenging. For example, the COVID-19 imaging library we’re building for researchers around the world already contains 600,000 images totaling 80GB, and that’s just from one of several institutions that will submit data sets. To make sure researchers can quickly access and download images for their projects, we need a storage platform that scales easily and has very low latency.

Remembering the “Dark Days”

When I joined RSNA in 2013, I inherited a storage array with high latency and bottlenecks. Those were dark days. We were always on call, crossing our fingers that we wouldn’t have to go into the data center to reboot the array. Developers complained that the storage hardware slowed down their code. The final straw was when we found out that an upcoming operating system upgrade would require us to completely wipe and rebuild the array. We needed storage that was faster, more reliable, and more scalable.

In 2015, we conducted proofs of concept with several storage vendors. As a nonprofit, we didn’t want the expense of a public cloud if we could avoid it.

Case of the Day Website: 65 Million Hits, 366GB Transferred

Pure Storage® blew all the other arrays out of the water. Pure FlashArray™ was scalable, fast, and redundant, and it had monster compression: up to 4.2:1 data reduction and 20.4:1 total reduction. The clincher was that we wouldn’t have to migrate data during upgrades; Pure just comes in and swaps out the controllers, with zero downtime. Since then, we’ve gone through several nondisruptive upgrades, and today we use the FlashArray//C device with an Evergreen™ Storage subscription.
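
To put those ratios in perspective, here’s some quick back-of-the-envelope math. The 100TB figure below is just an illustrative assumption, not one of our numbers; only the 4.2:1 and 20.4:1 ratios come from the array:

```python
# Illustrative arithmetic only: the 100TB figure is a made-up example;
# only the 4.2:1 and 20.4:1 ratios come from the array.
example_tb = 100.0
data_reduction = 4.2     # compression + dedupe
total_reduction = 20.4   # plus thin-provisioning savings

print(f"{example_tb:.0f}TB logical at {data_reduction}:1 -> "
      f"{example_tb / data_reduction:.1f}TB of physical flash")
print(f"{example_tb:.0f}TB provisioned at {total_reduction}:1 -> "
      f"{example_tb / total_reduction:.1f}TB on flash")
```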

“Spooling up a new environment is expensive in the cloud and took 15 to 20 minutes on our old storage array. With Pure, developers can spin up their own server in less than a minute, test their code, and then spool it back down.” –Christopher Harr, Sr. Site Reliability Engineer, RSNA

Our Case of the Day program illustrates how Pure supports our educational mission. Every day during our six-day annual meeting, we post a new image or short video for members to interpret. These are very complicated cases, and correct answers count toward CME credits. Members can view the Case of the Day on monitors in the event hall or on their mobile devices. The images often run to 25MB, sometimes larger. To load the Case of the Day web page within our two-second target, the storage platform has to respond to requests in less than two milliseconds. Before we switched to Pure, the only way we could deliver that performance was to physically transport servers with local storage from our data center to the event venue. That was tense!
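
To see why single-digit milliseconds matter, consider how per-read latency compounds across a page load. Here’s a minimal sketch, with a hypothetical read count standing in for real measurements:

```python
# Illustrative only: how per-read storage latency compounds across one page
# load. The read count is a hypothetical figure, not a measurement of our site.
reads_per_page = 100                 # assumed storage-backed reads per page
for latency_ms in (2, 10, 25):       # per-read storage response time
    storage_time_s = reads_per_page * latency_ms / 1000
    print(f"{latency_ms:>2}ms/read -> {storage_time_s:.1f}s spent in storage")
# At 2ms per read, storage uses 0.2s of the two-second budget; at 25ms it
# consumes the whole budget before the network and app tier get a byte.
```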

Now that we host the Case of the Day website on FlashArray//C, we don’t need local storage at the annual meeting. Images load quickly in any browser. The big test came in 2020, when we had to hold the annual meeting virtually. Over six days, the Case of the Day website received 65 million hits and transferred 366GB of images. FlashArray performed flawlessly, delivering a great experience to members around the world, even during the busiest times.
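
Averaged out, those numbers look modest, but traffic during a live event is anything but flat; the quick math below gives the averages only, and peaks ran well above them:

```python
# Back-of-the-envelope averages from the 2020 virtual meeting numbers.
hits = 65_000_000        # requests over the six-day event
transferred_gb = 366     # data served
seconds = 6 * 24 * 3600  # six days

print(f"Average rate: {hits / seconds:,.0f} requests/s")                 # ~125/s
print(f"Average payload: {transferred_gb * 1e9 / hits:,.0f} bytes/hit")  # ~5.6KB
```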

The Scale for Global Research

That COVID-19 image database I mentioned? We’re getting an amazing 4:1 data reduction on the FlashArray//C device. COVID-19 researchers can connect to our VPN and, depending on their internet speed, download an 80GB data set in about the time it takes to brew a pot of coffee. Faster access to images means more time for potentially life-saving research; this work is game changing, and its importance cannot be overstated. And stripping protected health information (PHI) from images for HIPAA compliance is faster and cheaper than it would be in a public cloud.
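
For a sense of what “a pot of coffee” means in practice, here’s rough transfer-time math at a few assumed connection speeds (protocol overhead ignored):

```python
# Rough transfer-time math for the 80GB data set; the link speeds are
# assumptions, and protocol overhead is ignored.
dataset_gb = 80
for mbps in (100, 500, 1000):                 # hypothetical VPN throughput
    minutes = dataset_gb * 8_000 / mbps / 60  # GB -> megabits, then minutes
    print(f"{mbps:>4}Mb/s -> ~{minutes:.0f} minutes")
```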

Faster Provisioning for Faster Innovation

Another big benefit of Pure for RSNA is faster software development for radiology research and education. Developers can code and test faster when they have their own environment. But spooling up a new environment is expensive in the cloud and took 15 to 20 minutes on our old storage array. With Pure, developers can spin up their own server in less than a minute, test their code, and then spool it back down. We’re working to make it even easier, with one-click creation of a development and test environment that’s automatically destroyed after 24 hours. Personally, I’m happy that our developers are no longer saying, “Your hardware is too slow!”
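
As a sketch of what that one-click workflow could look like, here’s a minimal example using Pure’s open source purestorage Python client. The array address, token, and volume names are placeholders, and this illustrates the clone-and-expire idea rather than our actual tooling:

```python
import time

import purestorage  # Pure's REST 1.x client: pip install purestorage

# Placeholders only: point these at a real array and golden image.
array = purestorage.FlashArray("flasharray.example.com", api_token="API-TOKEN")
GOLDEN_VOLUME = "dev-golden-image"  # hypothetical volume holding the base build

# Clone the golden volume; on FlashArray this is a metadata copy, so it's
# near-instant regardless of the volume's size.
clone = f"dev-{int(time.time())}"
array.copy_volume(GOLDEN_VOLUME, clone)
print(f"Created {clone}; attach it to a dev VM and start testing.")

# A scheduled job (cron, CI, etc.) runs this 24 hours later to tear down:
array.destroy_volume(clone)     # moves the volume to the eradication queue
array.eradicate_volume(clone)   # optional: reclaim the space immediately
```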

To sum it up, Pure is the heart and soul of our IT operations. It’s what makes our member services run fast and flawlessly, supporting our mission to promote excellent patient care and healthcare delivery. Pure is a phenomenal storage solution. Why pay for the cloud when we can do it ourselves, cheaper and faster?
