PharmaSUG Single Day Event
CDISC: The Good, The Bad, The Standards
Novartis Offices, Cambridge, MA
October 22, 2014, 8:00am - 5:00pm

The Single Day Event in Boston was a huge success! Thanks to everyone who presented and participated. Don't forget that all paid registrants will receive a $75 discount for our annual conference in Orlando next May!

Presentations
Title (click for abstract) | Presenter(s) (click for bio) | Presentation
Lab Data: Challenges and Our Solution | Ragini Khedoe, Novartis Vaccines and Diagnostics | PDF (1.3 MB)
Managing Controlled Terminologies across Clinical Lifecycle Stages | Brooke Hinkson & Mihaela Simion, Sanofi | PDF (0.9 MB)
CDISC Validation and OpenCDISC: Illuminating the Landscape | Carlo Radovsky, Etera Solutions | PDF (2.3 MB)
Map Metadata – Going Beyond the Obvious/Connecting the Dots | Greg Steffens, Novartis & Praveen Garg, ICON | PDF (0.6 MB)
There is Data Missing, What's Wrong With Your Program? | David Franklin, Quintiles | PDF (0.2 MB)
Common Misconceptions when Implementing SDTM | Jerry Salyers, Accenture | PDF (1.3 MB)
Thank You to Our SDE Sponsors

...and to our host, Novartis!

Presentation Abstracts

Lab Data: Challenges and Our Solution
Ragini Khedoe, Novartis Vaccines and Diagnostics, Head Clinical Data Repository Management and Standards

Novartis Vaccines has several sources of lab data, all of which need to be integrated into a single LB domain, including lab-specific metadata. I will describe the challenges we faced when implementing CDISC and our solutions.


Managing Controlled Terminologies across Clinical Lifecycle Stages
Brooke Hinkson, Sanofi, Global Head, Clinical Information Governance, Clinical Sciences and Operations
Mihaela Simion, Sanofi, Manager, Metadata Curation


This presentation proposes ways to handle the implementation and maintenance of controlled terminologies while remaining flexible to ever-changing regulatory reporting requirements and the evolving characteristics of study development.


CDISC Validation and OpenCDISC: Illuminating the Landscape
Carlo Radovsky, Etera Solutions, Managing Partner and Founder

In 2008, the OpenCDISC Validator launched, simplifying the task of validating SDTM submission datasets and their accompanying define.xml. With its ease of use and zero cost, it quickly became the default SDTM validation solution for both industry and the FDA. By 2011, OpenCDISC evaluated a submission with 191 unique SDTM checks, categorized as JANUS (93), OpenCDISC (20), Internal (2), and CDISC Terminology (76).

Today, any discussion of CDISC conformance validation has to start with OpenCDISC. The latest version, 1.5, can be used to validate ADaM, SEND, and multiple versions of SDTM and define.xml. For SDTM alone, it runs 530 unique checks. The historical categories are gone, and while the 339 new rules include further SDTM conformance checks, the majority are data quality checks developed by OpenCDISC with input from the FDA. This presentation discusses:
  • The history and current development of OpenCDISC
  • An exploration of rule implementation and interpretation, both within and beyond the OpenCDISC Validator
  • The FDA's current and future expectations for CDISC validation
  • Incorporation of validation into business processes


Map Metadata – Going Beyond the Obvious/Connecting the Dots
Greg Steffens, Novartis, Associate Director, Technology Innovations
Praveen Garg, ICON, Director, SAS Programming and Global Strategic Resourcing


Much attention has been paid to the design and use of metadata to store data standards and study data specifications, and to create industry-standard metadata such as the define.xml file. Far less attention has been focused on the design and use of metadata to describe the transformations and derivations of data. We must expand beyond the stopping points of data flow and concentrate on the movement of data from one stopping point to another. This paper describes the need for an industry standard for map metadata and presents a design that has been implemented with great success.

The metadata design principles described in earlier papers by Gregory Steffens and others apply to map metadata: (1) map metadata should not assume any one data standard, and (2) well-designed map metadata supports meta-programming. Mapping information is often relegated to free-form text fields, but placing it in structured metadata enables meta-programming of data flows. Beyond automating data flow, map metadata has many advantages: data flow transparency; standardized ways to exchange specifications about how data flows from one structure to another, as in SDTM to ADaM to IDB; and the ability to create a target metadatabase from a source metadatabase plus map metadata, such as deriving a study specification from a data standard.

Map metadata defines the relationship between the source and target databases at the dataset, variable, row, and value levels. Together with source and target metadata, it can automate the data flow and create transparent define files that describe transformation logic as well as database attributes. This is the next evolution of metadata and of meta-programming, leading to a true Data Transformation Engine (DTE). Automation at this level is essential to meet the efficiency and quality goals of today's environment, which requires us to do more work with less staff while improving quality at the same time.
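As a concrete illustration of the idea, the sketch below shows what variable-level map metadata might look like as a SAS dataset. The dataset name (MAPMETA), its columns, and the sample rules are hypothetical illustrations, not the design presented in this paper.

    /* Hypothetical variable-level map metadata: one row per target
       variable, recording where it comes from and the transformation
       applied. Column names and rules are illustrative only. */
    data mapmeta;
      infile datalines dsd dlm='|' truncover;
      length srcds srcvar trgds trgvar $8 maprule $80;
      input srcds $ srcvar $ trgds $ trgvar $ maprule $;
      datalines;
    VITALS|SBP|VS|VSORRES|direct copy
    VITALS|VSDT|VS|VSDTC|put(VSDT, is8601da.)
    VITALS||VS|VSTESTCD|assign constant 'SYSBP'
    ;
    run;

Because the mapping rules live in structured data rather than free-form text, a meta-program can read MAPMETA and generate the data step that performs the transformation, instead of a programmer hand-coding each mapping.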


There is Data Missing, What's Wrong With Your Program?
David Franklin, Quintiles, Manager, Statistical Programming

We have all heard it, or some variation of it: "There is data missing; what's wrong with your program?" You or your team spent a significant amount of time programming the SDTM datasets to the specifications given to you, or perhaps you wrote the specifications yourself and worked from them. Or maybe it is another transfer of the data, run through the same set of programs to create the SDTM datasets. In all cases the SAS logs appeared fine, with no ERROR messages found.

Interpretation of specifications is certainly one reason why programs that create SDTM datasets do not work as expected, particularly across different transfers of the raw data, but the more common reason is the data itself and how the SAS program works with it. This paper looks in detail at a number of WARNING and NOTE messages that may indicate serious issues with your SAS programs, each of them contributing to problems in your SDTM datasets. Also presented is a small macro that will search through all the logs in a directory for these issues.
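To make the idea concrete, below is a minimal sketch of such a log-scanning macro, assuming all logs sit in a single directory with UNIX-style paths. The macro name, its parameter, and the particular messages flagged are illustrative assumptions, not the macro presented in this paper.

    %macro scanlogs(dir=);
      data log_issues;
        length fname $64 filepath $512 line $1000;
        rc  = filename('logdir', "&dir"); /* fileref for the directory      */
        did = dopen('logdir');            /* open it for member-level reads */
        do i = 1 to dnum(did);
          fname = dread(did, i);          /* i-th file name in the directory */
          if lowcase(scan(fname, -1, '.')) = 'log' then do;
            filepath = catx('/', "&dir", fname);
            infile dummy filevar=filepath end=eof truncover lrecl=1000;
            do while (not eof);
              input line $char1000.;
              /* messages that often signal real data problems */
              if index(line, 'WARNING:')
                 or index(line, 'NOTE: MERGE statement has more than one')
                 or index(line, 'NOTE: Missing values were generated')
                 or index(line, 'uninitialized')
                 or index(line, 'NOTE: Invalid') then output;
            end;
          end;
        end;
        rc = dclose(did);
        keep fname line;
      run;

      proc print data=log_issues noobs;
        title "Suspect messages in SAS logs under &dir";
      run;
    %mend scanlogs;

    %scanlogs(dir=/myproject/logs)

The actual macro presumably flags a longer list of messages; the point is that a log free of ERRORs can still contain WARNINGs and NOTEs that signal real data problems.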


Common Misconceptions when Implementing SDTM
Jerry Salyers, Accenture, Data Standards Consultant

The SDTM Implementation Guide (SDTM-IG) provides the "bible" regarding the rules and best practices for converting source or operational data to the SDTM standard for submission purposes. Sponsors often turn this conversion or data-mapping task over to their CROs, with varying degrees of success. After reviewing SDTM datasets and providing feedback to sponsors and CROs alike, it is clear that a number of areas cause most of the confusion and mapping errors, many of which are not found with common data validation tools. We will look at a number of these types of examples, focusing on the need for human intelligence in providing quality legacy data conversion.

Presenter Biographies

David Franklin

David started programming in SAS in 1985 in the land now known as "Middle Earth". After finding his way to the surface, he worked in Europe and later found his way to New England, which he now calls home. Since 2004 David has been the editor of TheProgrammersCabin.com, a site dedicated to the SAS programmer that provides many tips for learning new ideas. Until recently David was a consultant working with many blue-chip companies in the US and Europe, but in February he hung up his hat, put his travel bag away, and accepted a position as Manager, Statistical Programming in the Quintiles Real World Late Phase division in Cambridge.


Praveen Garg

Praveen Garg is Director, SAS Programming & Global Strategic Resourcing at ICON Development Solutions. He has more than 14 years' experience across Statistical Programming, Data Management, and Clinical and Regulatory IT departments. He joined ICON in January 2010 as Sr Manager, Data Management and SAS Programming. Prior to joining ICON, Praveen worked at Eli Lilly and Company, managing a team that supported studies across multiple therapeutic areas and regulatory submissions. He has worked primarily as a service provider to pharmaceutical companies, giving him the opportunity to learn from the challenges faced by the industry and to devise solutions for them. He has implemented metadata-based automation built on CDISC standards at multiple organizations, and has a track record of leading highly motivated and effective global teams.


Brooke Hinkson

No biography available.


Ragini Khedoe

Ragini started her career as a clinical data coordinator (CDC) in the data management department at Chiron Vaccines. For the past seven years she has held a dual role as lab data specialist and data manager, managing data for many complex trials and processing lab data for all vaccine trials at Chiron, later Novartis. Since 2010 she has been part of a project to move towards CDISC standards as the lab subject matter expert, which has included legacy data mappings as well as the introduction of CDISC in current trials. She is currently the Head of Clinical Data Repository and Standards within Novartis Vaccines, leading a team of standards managers, clinical data repository managers, programmers, and testers who manage and maintain the Novartis Vaccines standards.


Carlo Radovsky

Carlo Radovsky has over 25 years of experience with SAS programming and clinical systems in the biopharmaceutical industry. He has been a member of the CDISC SDS Team since 2008, has contributed to a range of SDTM sub-teams, and has participated in cross-team initiatives where he lent his expertise to ADaM and CDASH sub-teams. In 2013, he participated in the PhUSE SDTM Validation Rules Project, focused on improving SDTM validation rules by bringing PhUSE, CDISC, OpenCDISC, and the FDA together to propose solutions. In 2014, he joined the newly formed CDISC SDTM Validation Rules sub-team, which is tasked with assessing the validation rules as represented in the SDTM-IG 3.2.


Jerry Salyers

With Accenture, Jerry works in Fred Wood's Data Standards Consulting group, providing internal consulting services while also working one-on-one with clients to review legacy-data mappings to SDTM-based datasets. He also creates and delivers training classes on both CDASH and the SDTM for internal functions, as well as custom training for external clients.


Mihaela Simion

Mihaela is a Metadata Curator at Sanofi, focusing on the definition, implementation, and governance of metadata standards for clinical trial conduct. She previously worked in a database programmer/analyst role, setting up databases in various clinical data management systems and programming data listings and operational reports used for internal metrics. She has a BS degree in Nursing and a programming background in relational databases and statistics.


Greg Steffens

Greg Steffens has been using SAS for programming and applications development since 1981, primarily in the pharmaceutical and health insurance industries. He has held positions ranging from technical lead to director-level management at seven pharmaceutical companies. He is currently Associate Director of Programming at Novartis. Greg's experience includes the design and development of metadata and software to automate data definition, data transformation, data validation, and FDA submissions.