PharmaSUG Single-Day Event
Emerging Technologies and Standards
Thursday, October 2, 2025
Many thanks to all those who made the PharmaSUG NC 2025 Single Day Event a success: our sponsors, presenters, organizers, and attendees!
Slides are available at the links below.
Check out the pictures from the SDE!
Ajay Gupta
Daiichi Sankyo
Single-Day Event Co-Chair
Jim Box
SAS
Single-Day Event Co-Chair
Margaret Hung
MLW Consulting
Single-Day Event Co-Chair
Conference Committee:
Ajay Gupta (Daiichi Sankyo), Margaret Hung (MLW Consulting), Jim Box (SAS), Eric Larson (IQVIA), Wu-yen Hung (MLW Consulting), Sampath Madanu (AstraZeneca), Neha Mod (Independent Consultant)
Social Media:
Emily Hansel, Richann Watson
Questions? Contact us!
Registration and Rates
| Registration Type | by Sep 28 | Late/On-site Registration (Oct 2) |
| SDE | $175 | |
| Student (with valid student ID)* | $175 | |
*Academic registration is for full-time students only. To obtain a student registration invitation, please send a copy of your student ID to the conference registrar.
Cancellation Policy
Cancellations can be requested by emailing the registrar at ncsde-registrar@pharmasug.org. Cancellations on or before September 5, 2025 will be refunded minus a $25 fee. Refunds will be issued by the same form of payment received. No refunds will be available after September 5, 2025.
Event Schedule
Thursday, October 2, 2025 | Single-Day Event Presentations
| Presentation Title (click to download slides) | Speaker |
| Key Guidelines, Tricks and Experiences for PMDA and Comparison with FDA and CDE Submissions | Ramesh Potluri, Servier Pharmaceuticals |
| Essential JSON Skills for Clinical Trial Programmers | Elliot Inman, SAS |
| Submitting RWD: Where We Are and Where We Are Going | Jeff Abolafia, Certara |
| Automating SAS Program Header Updates with Macros | Kexin Guan, Merck |
| Prompt, Program, Submit: Generative AI for Faster SDTM, ADaM, and TLFs | Matt Becker, SAS |
| Deciphering Exposure-Response Analysis Datasets: A Programmer’s Perspective for Oncology Studies | Sabari Sundaram, Pfizer Inc. |
| Data Not Missing at Random in PRO Analysis | Gary Leung, Gilead Sciences |
| My DIY Swiss Army Knife of SAS Procedures: A Macro Approach of Forging with My Favorite PROCs | Jason Su, Daiichi Sankyo |
| Upcoming Changes to the SDTM and SDTMIG | Diane Wold, CDISC |
| Your Brain on AI: Evaluating Cognitive Load, Dependency, and Data Integrity in AI-Assisted Clinical Trials | Aditya Gadiko, Emmes |
Presentation Descriptions
Essential JSON Skills for Clinical Trial Programmers
Elliot Inman, SAS
As regulatory agencies adopt data formats like Dataset-JSON, JSON will become a critical aspect of data management for life science programmers and statisticians. This talk will begin with a brief summary of the history of JSON and its technical merits for particular use cases, but will focus on practical aspects of working with JSON using PROC JSON. Knowing how to read, write, and manage data in JSON format is an essential skill for modern clinical trial analysis and reporting.
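As a minimal sketch of the round trip the abstract refers to (not code from the talk itself): PROC JSON can export a SAS dataset to a JSON file, and the JSON libname engine can read it back. The file name `class.json` and the use of `sashelp.class` are illustrative choices; the member names the JSON engine assigns depend on the structure of the file it reads.

```sas
/* Write a SAS dataset out as JSON (PROC JSON, available since SAS 9.4) */
proc json out="class.json" pretty;
   export sashelp.class;
run;

/* Read the JSON file back via the JSON libname engine */
libname injson json "class.json";

/* List the tables the engine derived from the file's structure */
proc contents data=injson._all_;
run;
```

By default, EXPORT writes table metadata alongside the data, which is what lets the libname engine reconstruct column names and types on the way back in.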
Prompt, Program, Submit: Generative AI for Faster SDTM, ADaM, and TLFs
Matt Becker, SAS
This session will investigate the practical applications of generative AI to automate and enhance critical clinical programming duties. From the mapping of raw data to SDTM domains to the crafting of ADaM specifications and the generation of boilerplate code or statistical summaries, we will analyze real-world use cases. These examples will demonstrate how to reduce manual effort while ensuring traceability and compliance. Additionally, we will illustrate the integration of these AI-driven procedures into SAS environments to improve productivity without compromising regulatory compliance.
Submitting RWD: Where We Are and Where We Are Going
Jeff Abolafia, Certara
This presentation will examine the current regulatory environment and required standards for submitting RWD. We will look at some of the challenges of submitting RWD using CDISC standards. Next, we discuss the pluses and minuses of submitting data using CDISC as opposed to alternative standards. Then, we look at the pros and cons of submitting data under a single-standard versus a hybrid approach, where each data type is submitted using the standard for which it is best suited. Finally, we provide short-term and long-term recommendations for submitting all types of study data.
Automating SAS Program Header Updates with Macros
Kexin Guan, Merck
Maintaining accurate and comprehensive documentation in pharmaceutical programming is essential for audit trails and program traceability. As clinical programming projects increase in complexity, manually updating program headers becomes a challenging and tedious task. Programmers often face difficulties in tracking multiple input datasets, output files, and macro calls across various SAS programs.
This paper presents a solution as a macro designed to automate the generation and updating of program headers. The macro retrieves essential metadata such as input datasets, macro calls, program outputs, logs, and program flows from existing SAS programs, and seamlessly integrates them into the program header. The macro can process individual files or entire directories, providing flexibility across diverse programming environments. Other key features of this macro include automatic version date updates, preservation of revision history, and selective management of existing header information. By automating the header generation process, this tool reduces manual effort, minimizes errors, and ensures up-to-date information, significantly enhancing documentation efficiency and accuracy in programming workflows.
Upcoming Changes to the SDTM and SDTMIG
Diane Wold, CDISC
SDTM v3.0 and SDTMIG v4.0 will enter public review in November 2025. SDTM v3.0 is a major version which introduces the representation of relationships between variables, simplifies variable roles, and reflects changes to the SDTMIG and the SENDIG. SDTMIG v4.0 is a major version which introduces new domains to allow representation of multiple participations by a subject (e.g., multiple screenings), a new horizontal structure for non-standard variables to replace the supplemental qualifier structure, and a new findings-about domain to represent adjudication of events. The variable metadata in domain specification tables has also been revised to include variable definitions, separate metadata into distinct columns, and replace assumptions previously included in CDISC Notes with references to either general or domain-specific assumptions.
My DIY Swiss Army Knife of SAS Procedures: A Macro Approach of Forging with My Favorite PROCs
Jason Su, Daiichi Sankyo
Here I take advantage of the SAS macro facility to forge the following four extremely popular procedures into one Swiss Army knife (SAK)-style macro, %pfs (the acronym): PROC PRINT, PROC CONTENTS (not in the acronym), PROC FREQ, and PROC SQL. Controlled by a mode-switch parameter (MODE), the macro can run any one of the four procedures in a succinct form supporting popular options, such as OBS, FIRSTOBS, WHERE, SHORT, VAR, etc. The macro has the capacity to carry out my most frequent jobs with these procedures, such as selectively printing records from a dataset, displaying its data structure, quickly deriving a variable's frequency distribution, counting records, etc. In the same spirit, fellow programmers are encouraged to create their own version of %pfs. Called with different modes, the SAK macro can perform any of the procedures and immediately free programmers from much of the repetitive syntax-typing work.
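To illustrate the mode-switch idea the abstract describes, here is a hypothetical sketch of such a macro. The macro name %quicklook and its parameters (MODE=, OBS=, WHERE=, VAR=) are illustrative assumptions, not the author's actual %pfs implementation.

```sas
/* Hypothetical mode-switch utility macro in the spirit of %pfs. */
/* MODE=P print, C contents, F freq, S sql row count.            */
%macro quicklook(data=, mode=P, obs=10, where=1, var=_all_);
   %if %upcase(&mode) = P %then %do;          /* PROC PRINT mode */
      proc print data=&data(obs=&obs);
         where &where;
         var &var;
      run;
   %end;
   %else %if %upcase(&mode) = C %then %do;    /* PROC CONTENTS mode */
      proc contents data=&data short;
      run;
   %end;
   %else %if %upcase(&mode) = F %then %do;    /* PROC FREQ mode */
      proc freq data=&data;
         where &where;
         tables &var / missing;
      run;
   %end;
   %else %if %upcase(&mode) = S %then %do;    /* PROC SQL mode */
      proc sql;
         select count(*) as n_rows
            from &data
            where &where;
      quit;
   %end;
%mend quicklook;

/* Example calls */
%quicklook(data=sashelp.class, mode=P, obs=5, var=name age)
%quicklook(data=sashelp.class, mode=F, var=sex)
```

One macro call per task replaces several lines of boilerplate procedure syntax, which is the productivity gain the talk emphasizes.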
Data Not Missing at Random in PRO Analysis
Gary Leung, Gilead Sciences
Mixed models with repeated measures (MMRM) are commonly employed to analyze patient-reported outcomes (PRO) in clinical trials. This method assumes data are missing at random (MAR). In response to FDA comments on the PRO analysis of a recent trial, the Gilead GHEOR team performed control-based pattern mixture models (CBPMM) to assess the impact of missing data when data are missing not at random (MNAR). This presentation shows programming methods and results based on the MNAR assumption.
Your Brain on AI: Evaluating Cognitive Load, Dependency, and Data Integrity in AI-Assisted Clinical Trials
Aditya Gadiko, Emmes
As AI tools become increasingly embedded in clinical trial workflows, from protocol development and data review to patient engagement and safety monitoring, there is growing interest in understanding how these technologies influence human cognition and data behavior. This talk explores emerging concerns around cognitive load, AI dependency, and data integrity in the context of AI-assisted clinical trials. Drawing insights from recent neuroscience and behavioral studies, we will examine how over-reliance on AI may lead to cognitive offloading, reduced task ownership, and potential declines in critical thinking or protocol compliance. The session will introduce concepts such as cognitive debt, engagement measurement, and the use of linguistic and behavioral markers to assess human-AI collaboration. By translating these findings into the clinical trial domain, the talk aims to highlight practical strategies for identifying and mitigating risks while maximizing the benefits of AI integration. Attendees will leave with a clearer understanding of how to balance automation with human oversight in high-stakes clinical environments.
Key Guidelines, Tricks and Experiences for PMDA and Comparison with FDA and CDE Submissions
Ramesh Potluri, Servier Pharmaceuticals
Submitting documents to regulatory authorities such as the Pharmaceuticals and Medical Devices Agency (PMDA) and the Food and Drug Administration (FDA) is a complex task requiring careful preparation and an in-depth understanding of regulatory requirements. This paper outlines key guidelines and highlights the differences between submissions to PMDA, FDA, and the Center for Drug Evaluation (CDE). Additionally, it provides practical tips and shared experiences to equip programmers and regulatory teams with the necessary knowledge for efficient PMDA submission processes and effective handling of regulatory inquiries.
Deciphering Exposure-Response Analysis Datasets: A Programmer's Perspective for Oncology Studies
Sabari Sundaram, Pfizer Inc.
Exposure-response (E-R) evaluation is essential in drug development and regulatory review, informing decision-making toward optimized trial design, dose and regimen selection, and benefit-risk assessment in both early- and late-stage trials. Analyzing the relationship between drug exposure and treatment outcomes using E-R data provides a level of granularity that supports the primary evidence of a drug's safety (identifying negative effects) and/or efficacy (positive effects). The preparation of high-quality E-R datasets is a key step in this space, and it can become challenging, especially in oncology studies, which are quite complex and involve multiple factors and mechanisms. This paper will explore the role of E-R analysis datasets in regulatory submissions, address key challenges in their creation, and examine the FDA's guidance on E-R analysis. We will also discuss the development of ADaM-standard E-R datasets and present masked dummy data and models to illustrate the practical application of E-R analyses. Ultimately, this paper emphasizes the importance of E-R evaluations in advancing drug development and optimizing therapeutic outcomes.
Presenters