We joined Stanford Health Care for a webinar and an informative conversation on benchmarking the performance of Enterprise Imaging systems.

Recording here

Of all the topics that come up during a Dicom Systems introduction, the most common question relates to system performance.

Theoretical performance boundaries are only marginally useful. While it couldn’t hurt to know how a system would perform under ideal, controlled conditions, too many variables influence performance in a real-world context, making it futile to offer a cookie-cutter answer or even a reliable range. Our customers’ unique infrastructures invariably present idiosyncrasies that won’t fit within a standardized profile.

Instead, we benchmarked the platform on real-world infrastructure in collaboration with one of our original customers, a high-volume Teleradiology end-user that processes over 1.5 million studies a year, and ran a series of tests to determine how transformation and workflow rules affect performance. The punchline, based on real-world benchmarking, is described in our White Paper titled “Can a single DICOM router process and route 4.1 Billion images per year?” (Click here to download the full analysis.)

Enterprise Imaging Infrastructure

From an infrastructure perspective, the first element to consider is the choice between a physical hardware deployment and a virtualized environment (VMware or Hyper-V).

For physical servers, Dicom Systems chooses to deploy the Unifier platform on standard Supermicro hardware – we are not in the hardware business, so it’s important for our support engineers to rely upon a well-known, predictable spec.  It’s also much easier to ship standard hardware for smooth and quick field replacements rather than support a multitude of hardware platforms.

For virtualized appliances, Dicom Systems has a standard recommended configuration.  Virtualized environments have quickly become a preferred method, as it is far quicker and easier to manage and update the resources of a VM than to upgrade or replace a physical server.

The Dicom Systems platform is Linux-based, which eliminates many of the Windows-related licensing, configuration and performance challenges.  By deploying self-contained Linux-based appliances, the ecosystem is far more resilient and redundant than applications that depend on Windows for availability.

Firewalls, Load Balancers and Network Configurations

Once the infrastructure is in place and all the DICOM associations are established, it’s time to examine the context in which the appliances are installed.  Are there additional firewalls and/or load balancers that could influence the performance of the Unifier appliance?

Some of our largest Enterprise customers routinely load balance traffic between primary and secondary data centers, and any additional firewall rules or port conflicts can impair the appliance’s performance.  Managing and documenting the network configuration is one of the most important steps in the deployment and maintenance of Dicom Systems Unifier appliances.
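A quick way to sanity-check that network path is a DICOM verification request (C-ECHO) from the appliance’s segment to the peer, which confirms that firewalls and load balancers are actually passing the association. The minimal sketch below uses the open-source pynetdicom library; the hostname, port, and AE titles are hypothetical placeholders, and this is a generic illustration, not part of the Unifier product.

```python
# Minimal DICOM connectivity check (C-ECHO) using pynetdicom.
# Host, port, and AE titles are placeholders -- substitute your own.
from pynetdicom import AE

PEER_HOST = "pacs.example.org"   # hypothetical peer behind the firewall/load balancer
PEER_PORT = 11112                # hypothetical DICOM port
PEER_AE_TITLE = "REMOTE_AE"

ae = AE(ae_title="ECHO_TEST")
# Request the Verification SOP Class (C-ECHO), UID 1.2.840.10008.1.1
ae.add_requested_context("1.2.840.10008.1.1")

assoc = ae.associate(PEER_HOST, PEER_PORT, ae_title=PEER_AE_TITLE)
if assoc.is_established:
    status = assoc.send_c_echo()
    if status:
        print(f"C-ECHO status: 0x{status.Status:04X}")  # 0x0000 indicates success
    assoc.release()
else:
    print("Association failed -- check firewall rules, ports, and load balancer configuration")
```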

Contextual Ecosystem

The surrounding ecosystem is where most of the variables that influence performance come into play.  The limiting variables are memory (RAM), computing power (cores), bandwidth, and storage (disk speed).
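As a rough way to see which of these variables binds first, the back-of-envelope sketch below compares the study throughput each resource could sustain on its own and reports the smallest as the likely bottleneck. Every figure in it is a hypothetical placeholder, not a measured Unifier number.

```python
# Back-of-envelope bottleneck check: which resource caps throughput first?
# All figures below are hypothetical placeholders, not measured Unifier numbers.

avg_study_mb = 250            # average study payload in MB (hypothetical)
network_mb_s = 120            # usable network bandwidth in MB/s (hypothetical)
disk_write_mb_s = 450         # sustained disk write speed in MB/s (hypothetical)
cpu_studies_s = 4.0           # studies/s the transformation rules can process (hypothetical)

ceilings = {
    "network": network_mb_s / avg_study_mb,   # studies/s limited by bandwidth
    "disk": disk_write_mb_s / avg_study_mb,   # studies/s limited by disk I/O
    "cpu": cpu_studies_s,                     # studies/s limited by cores/RAM
}

bottleneck = min(ceilings, key=ceilings.get)
for name, rate in ceilings.items():
    print(f"{name:>7}: {rate:.2f} studies/s (~{rate * 86400:,.0f}/day)")
print(f"Likely bottleneck: {bottleneck}")
```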

Storage – Disk Speed

The vast majority of competing interoperability platforms require transformations to be committed to a database, which dramatically overloads I/O with unnecessary steps.

By contrast, the Unifier platform performs all of its functions in memory (RAM), a substantial performance advantage. The majority of tag morphing, DICOM and HL7 transformations, and other imaging workflow-related changes happen in memory.  The Unifier uses short-term storage only for the actual image data. This distinctive element makes the platform highly dependent on I/O capacity: the faster the disk speed available to the Unifier, the better the platform performs, so SSDs are highly recommended.  If no SSDs are available, the Unifier can still perform well on spinning disks, just with lower performance expectations.
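To make the in-memory point concrete, the sketch below uses the open-source pydicom library to read an object into RAM, morph a few tags, and write the result back out, with no database round-trip in between. The file paths and tag values are hypothetical, and this is only an illustration of the general technique, not the Unifier’s implementation.

```python
# Illustration of in-memory DICOM tag morphing with pydicom (not the Unifier's code).
from pydicom import dcmread

ds = dcmread("incoming/ct_slice_0001.dcm")            # hypothetical input file

# Tag morphing happens entirely in RAM on the parsed dataset.
ds.InstitutionName = "EXAMPLE MEDICAL CENTER"          # hypothetical new value
ds.AccessionNumber = ds.get("AccessionNumber", "").strip()  # normalize whitespace
ds.remove_private_tags()                               # drop vendor-private elements

# Only the final result touches disk; no intermediate database commit is needed.
ds.save_as("outgoing/ct_slice_0001.dcm")
```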

Bandwidth Utilization

Of all the variables that influence performance, bandwidth is one of the most unpredictable. Some IT departments deliberately throttle or limit available bandwidth for specific applications or departments.  Even if gigabit or multi-gigabit networking is available throughout the health system, only a finite portion of that bandwidth can actually be utilized by any one application; IT has to make evidence-driven choices about how to allocate available network resources.

What is the size of your average payload?  Are you moving mostly X-ray images or large multi-slice CTs?  Is Oncology part of your practice? Are you producing or transferring mammography and tomosynthesis images?  Are you consistently applying the appropriate level of compression (transfer syntax)? Different exam types can be treated differently and adaptively.
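One way to answer the compression question is simply to audit what is actually flowing. The sketch below uses pydicom to read the transfer syntax from each file’s meta header and flag objects still stored uncompressed; the directory path is a hypothetical placeholder.

```python
# Audit transfer syntaxes to spot uncompressed payloads (pydicom; paths are placeholders).
from pathlib import Path
from pydicom import dcmread
from pydicom.uid import (
    ImplicitVRLittleEndian,
    ExplicitVRLittleEndian,
    ExplicitVRBigEndian,
)

UNCOMPRESSED = {ImplicitVRLittleEndian, ExplicitVRLittleEndian, ExplicitVRBigEndian}

for path in Path("incoming").rglob("*.dcm"):
    ds = dcmread(path, stop_before_pixels=True)   # header only, keeps the scan fast
    tsuid = ds.file_meta.TransferSyntaxUID
    size_mb = path.stat().st_size / 1e6
    flag = "UNCOMPRESSED" if tsuid in UNCOMPRESSED else "compressed"
    print(f"{path.name}: {ds.get('Modality', '?')} {size_mb:.1f} MB {tsuid.name} ({flag})")
```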

What time of day are the transfers occurring?  The time at which an image transfer takes place is an important factor: if the entire healthcare enterprise is competing for scarce bandwidth simultaneously at noon, the throughput cannot be expected to match a middle-of-the-night transfer.

For the purpose of this exercise, we tested three different hardware profiles running the same Enterprise DCMSYS software suite.

In a real-world context, our Enterprise-grade appliance can process and route 6,727 studies in 30 minutes, which equates to roughly 323K studies a day, or 118 million studies a year. For comparison, the total population of the state of California is 38 million people and the total population of the United States is 316 million. We know this might sound like overkill to some, but many of our clients use our equipment not only for regular traffic, which can be up to 10K exams per day, but also for pulling prior exams.
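The annualized figure follows directly from the 30-minute benchmark; a quick arithmetic check:

```python
# Scale the 30-minute benchmark figure up to daily and yearly throughput.
studies_per_30_min = 6727

per_day = studies_per_30_min * 2 * 24    # 48 half-hour windows per day
per_year = per_day * 365

print(f"{per_day:,} studies/day")        # ~322,896 (~323K)
print(f"{per_year:,} studies/year")      # ~117.9 million (~118 million)
```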

For instance, imagine an Oncology scenario in which PET exams are being transferred.  PET scans can be up to 1 GB, and in addition to the most recent scan there may be 10 prior studies or more. Let’s take a real-world scenario in which one 880 MB CT is being transferred and tag-morphed along with 16 priors, for a grand total of 15 GB of data. On a standard-sized device without optimized software and hardware, averaging 15 MB per second, it will take at least 1,000 seconds to transfer the data.

Based on the benchmarking test results, the DCMSYS appliance, with an average (not peak) transfer rate of 144 MB/s, will take about 100 seconds to deliver those images. That means DCMSYS can save roughly 15 minutes of a radiologist’s time in this one reading instance alone.
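The time savings quoted above follow from moving the same 15 GB payload at the two transfer rates; a quick arithmetic check:

```python
# Transfer-time comparison for the 15 GB study-plus-priors scenario above.
payload_mb = 15 * 1000                   # ~15,000 MB

baseline_mb_s = 15                       # standard, non-optimized appliance
dcmsys_mb_s = 144                        # benchmarked average (not peak) transfer rate

baseline_s = payload_mb / baseline_mb_s  # ~1,000 seconds
dcmsys_s = payload_mb / dcmsys_mb_s      # ~104 seconds

saved_min = (baseline_s - dcmsys_s) / 60
print(f"baseline: {baseline_s:.0f} s, optimized: {dcmsys_s:.0f} s, saved: ~{saved_min:.0f} min")
```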

During this test, we maxed out the SSD write speed.  144 MB per second is a realistic average SSD speed for real-world data transfers, including tag morphing.  Disk performance and network bandwidth are clearly the most decisive factors behind performance bottlenecks, well ahead of the Unifier application itself.
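If you want a quick sanity check of the sustained write speed of your own storage before deployment, a crude timing loop like the sketch below gives a ballpark figure. This is not the methodology used in our White Paper benchmark, and OS caching can inflate the result, so a dedicated benchmarking tool is more rigorous.

```python
# Crude sequential-write test; OS caching inflates results, so treat as a ballpark only.
import os
import time

CHUNK = b"\0" * (4 * 1024 * 1024)   # 4 MB chunks
TOTAL_MB = 1024                     # write 1 GB in total

start = time.perf_counter()
with open("write_test.bin", "wb") as f:
    for _ in range(TOTAL_MB // 4):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())            # force data to disk before stopping the clock
elapsed = time.perf_counter() - start

print(f"~{TOTAL_MB / elapsed:.0f} MB/s sustained write")
os.remove("write_test.bin")
```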

Check out the webinar that accompanies this blog here.