Converged IT and the Cloud

I promised that I would post a summary of our closing panel in the Converged IT and the Cloud thread at the Molecular Medicine Tri-Conference.

Unfortunately, I was having so much fun in the session itself that I didn’t take any notes at all. Please forgive errors and omissions. My slides are here, but they were the least important part of the conversation.

I opened the session with the question that my friend Carolyn posted in the comments of the last post: “What are the biggest barriers to immunotherapy becoming translational (FDA, funding limits, enrollees in clinical trials)? How can patients best support future immunotherapy developments?”

It sobered the audience considerably, especially when I pointed out that she asks as a current patient of a system that we all acknowledge has tons of room for improvement.

My point in starting with that question was to move the conversation up a level from IT people talking about IT stuff – and to provide both motivation and urgency. It is very unlikely that a session on “converged IT and the cloud” would be able to answer Carolyn’s question. That said, we would be remiss to sit around talking about network speeds and feeds, regulatory frameworks, costs per gigabyte, and other technical details without ever engaging with the high-level “why” that drives our industry.

Each of the four panelists prepared a brief summary on a specific topic:

Jonathan Sheffi (@sheffi) is the Product Manager for Genomics and Life Sciences within Google Cloud. He spoke about the convergence he sees in data structures and standards as customers bring different data types – health information, outcomes data, and so on – to the “same” cloud. This was pretty exciting to me, since it lays the infrastructure groundwork for some of the things we’ve been saying about collaboration and integration in the cloud.

Aaron Gardner is with BioTeam, and he shared a whirlwind review of machine learning and AI for our field. The coolest part, to me, was the idea of AI/ML as a de-noising tool. The hope is that it will let us take unwieldy volumes of data and reduce them to only the level of complexity necessary for a given task. It took me back to a dimly remembered time when I would talk about “Shannon Information Content” and similar concepts.
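A quick aside, entirely mine and not from Aaron’s talk: Shannon entropy is the classic back-of-the-envelope way to ask how many bits of “necessary complexity” a dataset actually carries. A minimal sketch in Python, with a hypothetical shannon_entropy helper:

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Shannon entropy, in bits per symbol, of a discrete sequence.

    A crude proxy for 'necessary complexity': the more redundant
    the data, the fewer bits per symbol it actually carries.
    """
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# A varied sequence sits near the 2-bit maximum for a four-letter
# alphabet; a repetitive one carries far less information.
print(shannon_entropy("AACGTACGTTAGC"))   # ~1.99 bits/symbol
print(shannon_entropy("AAAAAAAAAAAAAB"))  # ~0.37 bits/symbol
```

The de-noising hope, as I understood it, is exactly this gap: when the measured bits per symbol are far below the raw data volume, there is redundancy that a model can safely squeeze out.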

I first heard Saira Kazmi speak at the 2017 Bio-IT World, when she was still with the Jackson Laboratory, where she had earned a reputation as Jax’s “queen of metadata.” She combined a handful of deceptively simple techniques with an impressively diplomatic tenacity to create a sort of ad-hoc data lake – without ever pausing to go through the most painful parts of the data lake process. Instead, her team archived first, scraped file headers into JSON and stuffed them into a NoSQL database, and (my favorite) stored checksums of large primary data files in a database to identify duplicates and support provenance tracking.
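The checksum trick is simple enough to sketch. Here is a minimal, hypothetical version in Python – my reconstruction, not Saira’s actual code – using SQLite as a stand-in for whatever database Jax used, and assuming raw files live under a runs/ directory:

```python
import hashlib
import sqlite3
from pathlib import Path

def sha256_of(path, chunk_size=1 << 20):
    """Stream a (potentially huge) file through SHA-256, 1 MB at a time."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def register(db, path):
    """Record a file's checksum; return any earlier file with identical bytes."""
    checksum = sha256_of(path)
    prior = db.execute(
        "SELECT path FROM files WHERE sha256 = ?", (checksum,)
    ).fetchone()
    db.execute(
        "INSERT INTO files (path, sha256) VALUES (?, ?)",
        (str(path), checksum),
    )
    db.commit()
    return prior  # None, or a row holding the duplicate's path

db = sqlite3.connect("provenance.db")
db.execute("CREATE TABLE IF NOT EXISTS files (path TEXT, sha256 TEXT)")
for fastq in Path("runs").rglob("*.fastq.gz"):
    duplicate_of = register(db, fastq)
    if duplicate_of:
        print(f"{fastq} duplicates {duplicate_of[0]}")
```

Streaming the hash in chunks is the part that matters: primary genomics files are far too large to read into memory, but a table of digests is tiny, and it answers both “have we already stored these bytes?” and “where did this file come from?” long after the fact.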

Finally, we had Annerose Berndt (@AnneroseBerndt), who has just finished standing up a genome sequencing center to serve the UPMC hospitals. I asked her to hold forth a bit on security, compliance, quality systems, and other absolutely necessary bits of process discipline.

We shared a wide-ranging and illuminating conversation building on these topics. It was a blast.

As I said from the stage: I really cannot believe that it’s somehow part of my job to have conversations like this, with people of this caliber. How cool!


