HHS Public Access Author Manuscript

J Biomed Inform. Author manuscript; available in PMC 2017 August 01. Published in final edited form as: J Biomed Inform. 2016 August ; 62: 232–242. doi:10.1016/j.jbi.2016.07.008.

Developing a Data Element Repository to Support EHR-driven Phenotype Algorithm Authoring and Execution

Guoqian Jiang1, Richard C Kiefer1, Luke V Rasmussen2, Harold R Solbrig1, Huan Mo3, Jennifer A Pacheco4, Jie Xu5, Enid Montague5,6, William K Thompson6, Joshua C Denny3,7, Christopher G Chute8, and Jyotishman Pathak9


1Department of Health Sciences Research, Mayo Clinic College of Medicine, Rochester, MN
2Department of Preventive Medicine, Feinberg School of Medicine, Northwestern University, Chicago, IL
3Department of Biomedical Informatics, Vanderbilt University, Nashville, TN
4Center for Genetic Medicine, Feinberg School of Medicine, Northwestern University, Chicago, IL
5Feinberg School of Medicine, Northwestern University, Chicago, IL
6School of Computing, DePaul University, Chicago, IL
7Department of Medicine, Vanderbilt University, Nashville, TN
8School of Medicine, Johns Hopkins University, Baltimore, MD
9Division of Health Informatics, Weill Cornell Medical College, Cornell University, New York City, NY

Abstract


The Quality Data Model (QDM) is an information model developed by the National Quality Forum for representing electronic health record (EHR)-based electronic clinical quality measures (eCQMs). In conjunction with the HL7 Health Quality Measures Format (HQMF), QDM contains core elements that make it a promising model for representing EHR-driven phenotype algorithms for clinical research. However, the current QDM specification is available only as descriptive documents suitable for human readability and interpretation, but not for machine consumption. The objective of the present study is to develop and evaluate a data element repository (DER) for providing machine-readable QDM data element service APIs to support phenotype algorithm authoring and execution. We used the ISO/IEC 11179 metadata standard to capture the structure for each data element, and leveraged Semantic Web technologies to facilitate semantic representation of these metadata. We observed a number of underspecified areas in the QDM, including the lack of model constraints and pre-defined value sets. We propose a harmonization with the models developed in HL7 Fast Healthcare Interoperability Resources (FHIR) and the Clinical Information Modeling Initiatives (CIMI) to enhance the QDM specification and enable the extensibility and better coverage of the DER. We also compared the DER with the existing QDM implementation utilized within the Measure Authoring Tool (MAT) to demonstrate the scalability and extensibility of our DER-based approach.

Correspondence to Guoqian Jiang, MD, PhD, Mayo Clinic, Department of Health Sciences Research, 200 First Street, SW, Rochester, MN 55905, USA, Telephone: 507-266-1327, Fax: 507-284-1516, [email protected].

CONTRIBUTORS: G.J., R.K., and L.V.R. drafted the manuscript; G.J., H.R.S., R.K., L.V.R., and C.G.C. led data element repository development; G.J., R.K., and L.V.R. led mapping activities; L.V.R. led authoring environment and modularization studies; J.A.P. and H.M. led algorithm modeling studies; W.K.T., J.A.P., L.V.R., R.K., and H.M. led executability and adaptability studies; J.X. and E.M. led environmental scan and usability studies; J.P., G.J., J.C.D., and W.K.T. provided leadership for the project; all authors contributed expertise and edits.

Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

Graphical Abstract

Keywords: Quality Data Model (QDM); HL7 Fast Healthcare Interoperability Resources (FHIR); Metadata Standards; Semantic Web Technology; Phenotype Algorithms


1 INTRODUCTION

The creation of phenotype algorithms (i.e., structured selection criteria designed to produce research-quality phenotypes) and the execution of these algorithms against electronic health record (EHR) data to identify patient cohorts have become a common practice in a number of research communities, including the Electronic Medical Records and Genomics (eMERGE) Network,1–3 the Strategic Health Information Technology Advanced Research Project (SHARP),4, 5 the HMO Research Network (HMORN)6, 7 and the National Patient-Centered Clinical Research Network (PCORnet).8 However, only a limited toolbox exists for creating reusable and machine-executable phenotype algorithms, which has hampered effective cross-institutional research collaborations.9


To address this overarching challenge, we are actively developing a phenotype execution and modeling architecture (PhEMA)10 (http://projectphema.org/) to enable: 1) unambiguous representation of phenotype algorithm logic and semantically rich patient data; 2) effective execution of phenotype algorithms to generate reproducible and sharable results; and 3) a repository to share phenotypes and execution results for collaborative research. The Quality Data Model (QDM) has been chosen in the PhEMA project as an information model for representing phenotype algorithms. QDM was developed by the National Quality Forum (NQF) for representing EHR-based electronic clinical quality measures (eCQMs). In conjunction with the HL7 Health Quality Measures Format (HQMF), QDM contains core elements that make it a promising model for representing phenotype algorithms for clinical research.11, 12 However, the QDM specification13 is currently available only as descriptive text documents, which require human interpretation and implementation for broader use and machine consumption. We believe that a standards-based, semantically annotated rendering of the QDM data elements is critical to support the development of phenotype algorithm authoring and execution applications. The objective of this study is to develop and evaluate a data element repository (DER) that provides standard representations and machine-readable service APIs for data elements extracted from the QDM specification. The system architecture, tooling choices, and their evaluations are described in the following sections.


2 BACKGROUND

2.1 NQF QDM


The NQF QDM describes clinical concepts in a standardized format to enable electronic quality performance measurement in support of operationalizing the Meaningful Use Program in the United States. It consists of two modules: a data model module and a logic module.6 The data model module is used to represent clinical entities (e.g., diagnoses, laboratory results) and includes the notions of category, datatype, attribute, and value set (comprising concept codes from one or more terminologies). A QDM element encapsulates a certain category (e.g., Medication) with an associated datatype (e.g., “Medication, Administered”). Each datatype has a number of associated attributes (e.g., Dose). Figure 1 shows the QDM element structure.13 In QDM elements, value sets can be used to define the possible codes for a QDM element's definition or its attributes. The logic module includes logical, comparison, temporal, and subset operators and functions. These may be combined to constrain combinations of data model entities (e.g., Diagnosis A AND (COUNT(Medication B) > 5)). As of July 2015, the latest release of QDM is version 4.1.2.13 Table 1 shows the definitions and examples of the core model elements in the QDM specification.

2.2 ISO/IEC 11179 metadata standard


ISO/IEC 11179 is a six-part international standard known as the ISO/IEC 11179 Metadata Registry (MDR) standard.14 Part 3 of the ISO/IEC 11179 standard describes a model for formally associating data model elements with their intended meaning. Figure 2 shows a high-level data description metamodel in the ISO/IEC 11179 specification.14 The lower part of the metamodel is a representation layer, which describes how information about observations and values is represented; the upper part is a conceptual layer, which describes how the semantic meaning of the observations and values is represented unambiguously using standard domain ontologies.15 A data element is one of the foundational concepts in the specification. ISO/IEC 11179 specifies the relationships and interfaces between data elements, value sets (i.e., enumerated value domains), and standard terminologies.


2.3 Semantic Web technologies


The World Wide Web Consortium (W3C) is the main standards body for the World Wide Web.16 Its goal is to develop interoperable technologies and tools, as well as specifications and guidelines, to realize the full potential of the Web. The Resource Description Framework (RDF),17 Web Ontology Language (OWL),18 and SPARQL19 (a recursive acronym for SPARQL Protocol and RDF Query Language) specifications have all achieved the level of W3C Recommendations (the highest level for W3C standards), and have gained wide acceptance and use. RDF is a general-purpose framework for naming, describing, and organizing resources. A resource is identified, and can be referenced, by a Uniform Resource Identifier (URI). An RDF statement is represented in a triple format (i.e., subject-predicate-object). A set of RDF statements (i.e., triples) forms a directed graph, which expresses a graph data model. SPARQL is a standard query language over RDF graphs, and a SPARQL endpoint can be established to provide standard query services on RDF graphs. OWL provides a standard ontology modeling language to capture formal relationships among entities in a particular domain and enables semantic reasoning and logical inference. These W3C standards provide a solid foundation to define how to model, capture, and disseminate information with the explicit goal of maximizing semantic interoperability.
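The triple model and SPARQL-style pattern matching described above can be sketched in a few lines of plain Python. This is an illustration only (not production RDF tooling): triples are tuples, `None` plays the role of a SPARQL variable, and the element names reuse the base URI introduced later in this paper.

```python
# Illustrative sketch: an RDF graph as a set of (subject, predicate, object)
# triples, with a naive SPARQL-like pattern match over them.
QDM = "http://rdf.healthit.gov/qdm/element#"  # base URI used in this paper

triples = {
    (QDM + "DiagnosisActive", "rdf:type", "mms:DataElement"),
    (QDM + "DiagnosisActive", "rdfs:label", "Diagnosis, Active"),
    (QDM + "MedicationAdministered", "rdf:type", "mms:DataElement"),
}

def match(graph, s=None, p=None, o=None):
    """Return triples matching a pattern; None acts like a SPARQL variable."""
    return [(ts, tp, to) for (ts, tp, to) in graph
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

# "SELECT ?s WHERE { ?s rdf:type mms:DataElement }" in miniature:
elements = sorted(t[0] for t in match(triples, p="rdf:type", o="mms:DataElement"))
```

A real deployment would of course use an RDF store and SPARQL endpoint, as described in the Methods section; the point here is only the shape of the data model.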


The W3C Semantic Web Health Care and Life Sciences (HCLS) interest group20 has been established to develop, advocate for, and support the use of Semantic Web technologies across health care, life sciences, clinical research, and translational medicine. As a joint collaboration with the W3C HCLS on Clinical Observations Interoperability, a sub-group of the HL7 Implementable Technology Specifications (ITS) group known as RDF for Semantic Interoperability21 was established to facilitate the use of RDF as a common semantic foundation for healthcare information interoperability. In this study, we use Semantic Web technologies to build our DER infrastructure, as detailed in the following sections.

2.4 Related work


There have been a number of research efforts to develop and promote metadata standards that help scientists annotate their research data and results.22–24 Notably, the NIH Big Data to Knowledge (BD2K) bioCADDIE (biomedical and healthCAre Data Discovery and Indexing) project25 and the Center for Expanded Data Annotation and Retrieval (CEDAR) project26 were initiated for the study and development of metadata standards and applications for annotating biomedical datasets to facilitate data discovery, interpretation, and reuse. The BD2K community has produced a metadata specification27 to describe the metadata and structure of datasets. However, these efforts neither focus on the use cases in EHR-driven phenotype algorithm applications, nor use an ISO/IEC 11179-based approach to manage their metadata. In the present study, we investigated and used the ISO/IEC 11179 standard, since it defines a common language to describe different aspects of metadata registries and enables the exchange of such metadata between systems that follow the standard. ISO/IEC 11179-based metadata repositories have been successfully implemented in the following projects: 1) the National Cancer Institute (NCI) Cancer Data Standards Registry and Repository (caDSR);28, 29 2) the Semantic MDR developed by two European projects: EHR-enabled clinical research (EHR4CR) and patient safety (SALUS);30, 31 and 3) the CDISC2RDF repository under the Food and Drug Administration (FDA) PhUSE project.32 With respect to the use of the data elements in the QDM specification, the Measure Authoring Tool (MAT)33 currently implements a subset of the most recent QDM specification to provide a software tool for the creation of electronic Clinical Quality Measures (eCQMs) in standard formats. Table 2 shows the database tables for the core QDM elements implemented in the MAT. However, the implementation is MAT-specific and not scalable for reuse in broad clinical research communities.

3 METHODS

We used the ISO/IEC 11179 metadata standard14 to capture the metadata structure for each data element in the DER. Furthermore, we leveraged Semantic Web technologies16 to facilitate semantic representation of these metadata. In addition, we built a RESTful service application programming interface (API) driven by requirements from the PhEMA phenotype algorithm authoring and execution applications. We observed that there are a number of underspecified areas in the QDM, including the lack of model constraints (e.g., datatype constraints) and pre-defined value sets. We also compared our approach to an existing QDM implementation in the Measure Authoring Tool (MAT) to demonstrate the scalability and extensibility of our DER-based approach.

3.1 SYSTEM ARCHITECTURE


We developed a three-layer semantic framework for data element representation and management. Figure 3 shows the system architecture of the framework. In the repository layer, we describe the QDM data model elements and logic elements using the ISO/IEC 11179 standard and represent them using RDF and OWL. In the semantic service layer, we developed a suite of Semantic Web services on top of our metadata repository using the Linked Data API.34 The services include both RDF/SPARQL-based services and simple RESTful services with JSON and XML renderings. In the application layer, the RESTful services are consumed within the phenotype algorithm authoring and execution applications.

3.2 SYSTEM IMPLEMENTATION


3.2.1 Creating a QDM reference model schema in OWL—We manually developed a QDM schema using OWL. The schema represents the QDM modules and core model elements as shown in Table 1, and is designed as an extension of the ISO/IEC 11179 standard. In the present study, we used a meta-model schema (MMS) in the OWL/RDF rendering (developed by the FDA PhUSE Semantic Technology project32), which is a subset of the ISO/IEC 11179 Part 3 metadata model. Figure 4 shows a Protégé 5 screenshot illustrating the QDM schema and its instance elements rendered in the OWL format as an extension of the MMS. In the MMS, the Administered Item is the root class; in ISO/IEC 11179, an administered item is any item that has registration, governance, and life cycle information associated with it. The MMS also has a number of custom specializations of Context to distinguish the different context levels in a model. The left-hand panel in Figure 4 shows the high-level classes of the MMS and the QDM schema. The QDM Data Model Element and QDM Logic Element are represented as subclasses of the top class Administered Item of the MMS. We designed a base URI (Uniform Resource Identifier), http://rdf.healthit.gov/qdm/schema#, for the QDM schema so that each element in the schema can be uniquely and uniformly identified.
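The class hierarchy just described can be sketched as a simple subclass map. The local class names below are assumptions (the paper gives only the display names "QDM Data Model Element", "QDM Logic Element", etc.); the structural point is that all QDM schema classes hang off the MMS root, Administered Item.

```python
# Sketch of the QDM schema as an extension of the MMS (class names assumed).
SUBCLASS_OF = {
    "qdm:QDMDataModelElement": "mms:AdministeredItem",
    "qdm:QDMLogicElement":     "mms:AdministeredItem",
    "qdm:QDMCategory":         "qdm:QDMDataModelElement",
    "qdm:QDMDatatype":         "qdm:QDMDataModelElement",
    "qdm:QDMAttribute":        "qdm:QDMDataModelElement",
}

def ancestors(cls):
    """Walk rdfs:subClassOf links from a class up to the root."""
    chain = []
    while cls in SUBCLASS_OF:
        cls = SUBCLASS_OF[cls]
        chain.append(cls)
    return chain
```

An OWL reasoner would derive the same transitive subsumption; the helper just makes the intended hierarchy explicit.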


Two of the authors (GJ and RK) manually converted the data elements specified in QDM version 4.1.2 from an NQF specification document into an Excel spreadsheet. We then wrote a Java-based program to parse the spreadsheet and transform the data elements into the RDF Turtle format,35 which represents an RDF graph in a compact textual format. Each QDM element is also asserted as an instance of ISO 11179 Data Element. The middle panel of Figure 4 shows a list of data elements for a particular class (e.g., QDMDatatype); the right-hand panel shows the metadata (e.g., label, textual definition, types, and context) for a particular data element (e.g., “Diagnosis, Active”). We also designed another base URI, http://rdf.healthit.gov/qdm/element#, for uniquely and uniformly identifying each data element instance. In total, 19 instances of QDM Category, 76 instances of QDM Datatype, 528 instances of QDM Attribute, and 53 instances of QDM Logic Element were populated using the QDM schema.
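The spreadsheet-to-Turtle step can be illustrated with a minimal generator. The paper's actual converter was Java-based; the Turtle shape below (and the `mms:` namespace URI) is an assumed rendering, not the project's exact output.

```python
# Sketch of the spreadsheet-to-Turtle conversion (output shape is illustrative).
ELEMENT = "http://rdf.healthit.gov/qdm/element#"  # base URI from the paper

def to_turtle(rows):
    """rows: (local_name, label, qdm_type) tuples -> Turtle text."""
    lines = ["@prefix qdm: <%s> ." % ELEMENT,
             "@prefix mms: <http://example.org/mms#> ."]  # placeholder namespace
    for local, label, qdm_type in rows:
        # Each QDM element is asserted both as an ISO 11179 Data Element
        # and as an instance of its QDM schema class.
        lines.append("qdm:%s a mms:DataElement , qdm:%s ;" % (local, qdm_type))
        lines.append('    rdfs:label "%s" .' % label)
    return "\n".join(lines)

ttl = to_turtle([("DiagnosisActive", "Diagnosis, Active", "QDMDatatype")])
```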


3.2.2 Developing the Semantic Web data element services—After the data elements extracted from the QDM specification are represented in RDF, they are loaded into an RDF triple store. In the present study, we used an open source RDF store known as 4store36 for the backend repository. Using the built-in feature of 4store, we established a SPARQL endpoint that provides standard semantic query services. As RESTful services are well supported by software developers, we adopted the principles of the Linked Data API and developed simple RESTful services on top of the RDF-based DER. Specifically, we designed the service URI scheme based on the QDM schema (see Table 3), and created a collection of SPARQL queries that retrieve the metadata required by each service URI defined in the scheme. The services provide easy-to-process representations of the data element metadata in XML and JSON formats. Both the SPARQL endpoint and the simple RESTful service APIs are publicly available from the PhEMA project website at http://projectphema.org/.
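The pairing of a service URI with a backing SPARQL query might look like the following. Both the URI pattern and the metadata predicates are assumptions for illustration (Table 3 defines the actual scheme, and the repository's predicate vocabulary is not reproduced here).

```python
# Hypothetical service URI scheme and the kind of SPARQL query each
# service URI would run against the endpoint.
BASE = "http://rdf.healthit.gov/qdm"  # base assumed from the paper's URIs

def service_uri(kind, name=None):
    """kind: e.g. 'categories', 'datatypes', 'attributes'; name is optional."""
    uri = "%s/%s" % (BASE, kind)
    return uri if name is None else "%s/%s" % (uri, name)

def metadata_query(element_uri):
    """Build a SPARQL query fetching label and definition for one element."""
    return ("SELECT ?label ?definition WHERE { "
            "<%s> rdfs:label ?label ; skos:definition ?definition . }"
            % element_uri)

uri = service_uri("datatypes", "DiagnosisActive")
query = metadata_query("http://rdf.healthit.gov/qdm/element#DiagnosisActive")
```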


3.2.3 Interacting with phenotype algorithm authoring and execution applications—We developed the requirements for the DER and services in collaboration with the PhEMA developers, both through discussions about desired aspects of these services and by examining the prototype implementation of the PhEMA phenotype algorithm authoring and execution applications. This process was conducted iteratively to improve the utility of the services as the applications matured. Features desired by the application developers included a simple RESTful API that returned JSON or XML representations of the data, which obviated the need to perform SPARQL queries directly. As of December 2015, the PhEMA authoring tool has implemented many of its features using the DER service APIs, including: 1) a QDM model element browser that renders the QDM elements in a hierarchical tree; 2) a feature to highlight metadata for a particular data element (e.g., its textual definition); and 3) a feature to suggest the value set binding for a particular data element. The use of the JSON REST API has allowed the authoring tool to rapidly introduce new data elements during development by leveraging consistent, computable definitions. Figure 5 shows (A) the use of QDM data elements and their metadata (e.g., textual definition) in the PhEMA phenotype algorithm authoring application and (B) the use of QDM data elements in constructing executable phenotype algorithms in the Konstanz Information Miner (KNIME) analytics platform (https://www.knime.org/).37
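From a client's perspective, consuming the JSON REST API reduces to parsing a small payload. The response shape below is hypothetical (including the field names and the definition text); it only illustrates the kind of metadata an authoring tool would surface.

```python
import json

# Hypothetical JSON payload from a DER data element service (for illustration;
# the live services are described on the PhEMA project site).
sample_response = """{
  "element": "DiagnosisActive",
  "label": "Diagnosis, Active",
  "definition": "(textual definition from the QDM specification)",
  "attributes": ["Severity", "Ordinality", "StartDatetime"]
}"""

def element_summary(payload):
    """Pull the fields an authoring tool would show in tooltips and trees."""
    data = json.loads(payload)
    return data["label"], data["definition"], len(data["attributes"])

label, definition, n_attrs = element_summary(sample_response)
```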

4 FINDINGS AND DISCUSSION

Two fundamental service components are proposed in PhEMA to enable the standard representation of the data elements and value sets used in phenotype algorithms: data model services and terminology services. The data element repository in the present study is a reference implementation of these two components in the PhEMA application suite.


4.1 Underspecified constraints in QDM

Using the ISO/IEC 11179-based representation, we were able to identify a number of underspecified areas in QDM. First, we found that no datatype or cardinality is specified for the data elements in QDM, which may cause arbitrary interpretation of a data element when it is used. For example, the datatype of “Diagnosis Active. Severity” should be specified as “Encoded Value”, and that of “Diagnosis Active. Start Datetime” should be specified as “Date”. With such a specification, phenotype algorithm applications could understand that the former element is associated with a list of coded values (i.e., a value set) and that the value of the latter is a date. In addition, with a cardinality constraint, the system could know whether the values associated with a data element are required or optional. However, to the best of our knowledge, the QDM specification does not provide such constraints.
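To make concrete what such constraints would buy an application, here is a minimal sketch of a datatype/cardinality check. The constraint table itself is an assumption (QDM does not define one, which is precisely the gap being discussed); only the two example elements from the paragraph above are used.

```python
import datetime

# Assumed constraint table for the two examples discussed in the text.
CONSTRAINTS = {
    "DiagnosisActive.Severity":      {"datatype": "EncodedValue", "required": False},
    "DiagnosisActive.StartDatetime": {"datatype": "Date",         "required": True},
}

def valid(element, value):
    """Check a value against the element's datatype and cardinality."""
    c = CONSTRAINTS[element]
    if value is None:
        return not c["required"]          # optional elements may be absent
    if c["datatype"] == "Date":
        return isinstance(value, datetime.date)
    if c["datatype"] == "EncodedValue":
        return isinstance(value, str)     # stand-in for value-set membership
    return False
```

With machine-readable constraints like these, an authoring or execution tool could reject malformed criteria at edit time rather than at query time.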


To deal with these modeling challenges, we are exploring recent developments in the clinical modeling communities, including HL7 Fast Healthcare Interoperability Resources (FHIR) and the Clinical Information Modeling Initiatives (CIMI). FHIR is an emerging HL7 standard that leverages existing logical and theoretical models to provide a consistent, easy-to-implement, and rigorous mechanism for exchanging data between healthcare applications.38 The primary goal of CIMI is to “improve the interoperability of healthcare systems through shared implementable clinical information models”;39 CIMI works with FHIR in defining resources. In our studies, we have extended the QDM schema with the notions of FHIR Datatypes and Resources and populated the schema with data elements from FHIR models.40 By loading the DER with the data elements extracted from FHIR models, on the one hand, the well-defined constraints, including data types and value sets, could potentially be used to enhance the QDM specification with appropriate mappings. To this end, we also developed a crowdsourcing approach for harmonizing high-level data elements between QDM and HL7 FHIR,41 which provides the potential to capture and reuse the constraints defined in the HL7 FHIR models for phenotype applications. In total, 94 data elements from QDM (consisting of 18 QDM Categories and 76 QDM Datatypes) and 98 data elements from FHIR (all FHIR Resources) were extracted for mapping. We received responses from 7 team members, and 206 mapping pairs were created. We used Fleiss' kappa42 to assess the reliability of agreement between a fixed number of raters: if the raters are in complete agreement then kappa = 1; if there is no agreement among the raters, then kappa ≤ 0. All QDM data elements had at least one mapping suggested, whereas 65 FHIR data elements did not have any mapping. We observed that most of these 65 FHIR data elements are ones not commonly used in phenotype algorithms (data not shown). Only fair agreement (kappa = 0.24) was achieved. The QDM categories Communication, Condition/Diagnosis/Problem, Encounter, Medication, Procedure, and Patient Characteristics were in relatively high inter-rater agreement (IRA) (see Table 4). All of these mappings with high IRA are also among the top data elements used in phenotype algorithms.43–45 The textual definitions of the data elements, along with multiple factors (e.g., attributes associated with each type, ambiguity in naming within the two models), are important for creating correct mappings. Most of the 65 FHIR data elements that did not have any mappings belong to non-clinical categories of FHIR Resources, indicating that FHIR covers more granular and infrastructure-related data elements. We consider that a complete and high-quality mapping produced through the harmonization will help clinical researchers understand the domain coverage of the two models and ultimately establish the degree of interoperability between QDM-based phenotype algorithms and patient data populated with FHIR models. On the other hand, clinical phenotype applications require comprehensive coverage of data elements to better support a variety of use cases in clinical and translational research. This demands the extensibility of a DER that can support loading data elements from different clinical information models and resources. Thus, we plan to develop a standard interface with the CIMI39 modeling languages, which would enable the DER to load data elements from HL7 and CIMI-compliant clinical models.

4.2 Lack of pre-defined value sets in QDM


Although there is a notion of Value Set in the QDM schema, the QDM specification does not provide any pre-defined value sets that can be reused by QDM adopters. Each QDM adopter therefore needs to determine its own strategy for value set definition and management. For example, the 2014 Clinical Quality Measures (CQMs)46 use the National Library of Medicine Value Set Authority Center (VSAC)47 as a repository for associated value sets. In the PhEMA project, we adopted the combination of the VSAC (for reuse of existing value sets) and the Object Management Group (OMG) standard Common Terminology Services 2 (CTS2)48 for value set definition and management in support of phenotype algorithm creation and execution. We developed CTS2 value set services for both the VSAC value sets and HL7 FHIR value sets, so that the PhEMA applications can invoke such services through standard APIs. Although value sets needed for PhEMA that do not already exist in the VSAC could be uploaded to the VSAC with appropriate authoring credentials, we consider that the CTS2-based value set service APIs provide additional features complementary to the VSAC-based services. First, CTS2 is an OMG standard; standards-based value set services are more interoperable across different systems that observe the same standard. Second, CTS2 has a write API that allows users to define their own value sets within PhEMA applications, which is one of the key requirements for the creation of phenotype algorithms. Third, from the perspective of user experience, publishing value sets through the VSAC is not straightforward given the complexity of the VSAC system, and it is desirable to be able to define value sets within PhEMA applications without directing users to a separate system.
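The core value set operations a phenotype application needs (expansion and membership testing) can be sketched as follows. This is an in-memory analogue, not the CTS2 API; the value set identifier and code list are illustrative rather than taken from the VSAC.

```python
# Minimal in-memory analogue of value set resolution and membership testing
# (value set id and codes are illustrative, not real VSAC content).
VALUE_SETS = {
    "example-ami-valueset": {
        ("SNOMED-CT", "57054005"),   # Acute myocardial infarction
        ("ICD-9-CM", "410.01"),
    },
}

def resolve(value_set_id):
    """Return the expanded member codes of a value set."""
    return VALUE_SETS[value_set_id]

def is_member(value_set_id, system, code):
    """Test whether a (code system, code) pair belongs to a value set."""
    return (system, code) in resolve(value_set_id)
```

A CTS2 (or FHIR terminology) service exposes these same operations over HTTP, with versioning metadata that a local dictionary like this omits.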


In addition, we loaded all the data elements and their associated value sets used in the definition of the CQMs into our DER to facilitate the reuse of the VSAC value sets. Figure 6 shows a QDM-based semantic representation of criteria defined in CQM 30: “Diagnosis, Active: Acute Myocardial Infarction (AMI) (ordinality: Principal)” starts during Occurrence A of “Encounter Inpatient”. As illustrated in Figure 6, the data elements qdm:AcuteMyocardialInfarction (an instance of the QDM datatype qdm:DiagnosisActive), qdm:EncounterInpatient (an instance of the QDM datatype qdm:EncounterPerformed), and qdm:Principal (an instance of the QDM attribute qdm:DiagnosisActive:Ordinality) are linked with their corresponding value sets using the predicate mms:dataElementValueDomain, which is defined in the meta-model schema based on ISO/IEC 11179. With such a semantic representation, the PhEMA authoring applications could, for example, provide a feature that recommends value sets to users who choose a CQM data element to define their phenotype algorithms.
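The Figure 6 linkage can be rendered as triples, with a lookup of the kind a value set recommendation feature would perform. The value set node names are placeholders (the actual VSAC OIDs are not reproduced here).

```python
# The CQM 30 linkage from Figure 6 as triples; value set names are placeholders.
cqm30_graph = [
    ("qdm:AcuteMyocardialInfarction", "rdf:type", "qdm:DiagnosisActive"),
    ("qdm:EncounterInpatient",        "rdf:type", "qdm:EncounterPerformed"),
    ("qdm:Principal",                 "rdf:type", "qdm:DiagnosisActive:Ordinality"),
    ("qdm:AcuteMyocardialInfarction", "mms:dataElementValueDomain", "vs:ExampleAMIValueSet"),
    ("qdm:EncounterInpatient",        "mms:dataElementValueDomain", "vs:ExampleEncounterValueSet"),
]

def value_domain(graph, element):
    """Return the value set bound to a data element, if any."""
    for s, p, o in graph:
        if s == element and p == "mms:dataElementValueDomain":
            return o
    return None
```

This is exactly the query an authoring tool would issue to recommend a value set once a user selects a CQM data element.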


4.3 Comparison with the MAT-based implementation

As previously mentioned, the MAT33 implements a subset of the most recent QDM specification and provides a web-based tool that allows measure developers to author eCQMs using the QDM. The MAT backend is based on a relational database schema, and the QDM model elements are implemented in a number of database tables, as shown in Table 2.


In contrast, we implemented the QDM specification using a DER-based approach leveraging standards-based representations and Semantic Web technologies. First, the DER-based QDM implementation is application-independent, while the MAT-based QDM implementation tightly couples QDM elements to the MAT application. Our DER is developed as a module of the PhEMA infrastructure, which provides semantic and REST service APIs that may be consumed by any application that uses QDM, including (but not limited to) the PhEMA authoring and execution tools. For example, the DER is currently being adopted by a Big Data to Knowledge (BD2K) bioCADDIE pilot project49 to build tools for indexing clinical research datasets using HL7 FHIR; here, the DER-based approach with FHIR data element services will be used in building indexing services. Second, the DER-based approach uses the ISO/IEC 11179 standard to represent the metadata of the data elements from QDM. We found that the textual definitions of the QDM data elements, which are important to help improve the users' experience when choosing QDM data elements for building their phenotype algorithms, are not captured in the MAT. Also, as previously described, by converting the textual description to a standard representation we were able to identify underspecified areas, such as missing constraints, in the QDM specification. Moreover, ISO/IEC 11179 specifies a semantically precise structure for data elements and provides a mechanism to identify two data elements from different models with the same intended meaning, which would facilitate semantic harmonization of data elements between different models, e.g., between QDM and FHIR. We also leveraged the CTS2 standard to enable machine-readable value set services, which provide much of the value set management metadata not covered by the MAT.
For example, the MAT Code System table (see Table 2) does not have a field for the Code System Version, whereas the CTS2 value set services do cover this


metadata. This is important for handling different versions of a code system. Third, the DER-based approach leverages scalable Semantic Web technologies. The RDF-based graph data model allows incremental data integration from disparate data sources and provides an agile approach for dynamic aggregation of large knowledge resources and datasets. As demonstrated, our DER has been extended to load data elements from other information models, such as HL7 FHIR, and other sources, such as the eCQMs.

4.4 Limitations of the study


There are a number of limitations. First, we used a subset of the ISO/IEC 11179 metadata standard, i.e., the meta-model schema (MMS) adopted by the CDISC2RDF project. Although the MMS contains the core constructs of the metadata standard and has proved very useful for representing the metadata of the QDM data elements in a lightweight manner, the full potential of the ISO/IEC 11179 standard has yet to be explored. For example, our DER has not implemented a key feature of ISO/IEC 11179 that enables the definition of the intended meaning of a data element or a permissible value using standard vocabularies. Exploring the full specification of the ISO/IEC 11179 standard is beyond the scope of this study but will be one of our future study areas. Second, the service URI scheme designed for the REST service API is based on the notions of QDM elements (e.g., category, datatype, and attribute). This works well for exposing metadata of the QDM data elements, but may cause confusion if we use these notions to represent metadata of data elements extracted from other data models (e.g., FHIR). This demands a generic framework for building metadata services when we load our DER with data elements from a variety of data models. Here, we believe that ISO/IEC 11179 would provide such a generic framework to enable the creation of a standard metadata service URI scheme, and we are working on defining such a scheme using the ISO/IEC 11179 standard. Third, the mappings between QDM and FHIR created using the crowdsourcing approach are still preliminary, and only fair agreement (kappa = 0.24) was achieved. We plan to look first into those mappings with high agreement among reviewers, while a community-based review mechanism will be needed to incrementally build a collection of reliable mappings between QDM and FHIR. Finally, the DER and its metadata services are mainly used to support the PhEMA phenotype algorithm applications.
To enable a robust metadata infrastructure, it is desirable to collect requirements from research applications in broader clinical research communities. In addition to FHIR, we are also closely collaborating with a number of such communities, including BD2K, eMERGE, PCORnet, and OHDSI (Observational Health Data Sciences and Informatics),50 to identify their metadata management needs so as to enhance and expand the capabilities of our DER infrastructure.
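For readers unfamiliar with the statistic, Fleiss' kappa42 can be computed directly from reviewer-by-category counts. The sketch below is illustrative only: the rating table is made-up toy data, not the study's QDM-to-FHIR mapping results.

```python
# Illustrative sketch (not the authors' code): Fleiss' kappa for mapping
# agreement. Rows are subjects (e.g., QDM data elements); columns are
# candidate categories (e.g., FHIR resources); table[i][j] counts how many
# reviewers assigned subject i to category j.

def fleiss_kappa(table):
    """Compute Fleiss' kappa; each row must sum to the same rater count."""
    n_sub = len(table)
    n_rat = sum(table[0])  # raters per subject (assumed constant)
    n_cat = len(table[0])
    # proportion of all assignments falling into each category
    p_j = [sum(row[j] for row in table) / (n_sub * n_rat)
           for j in range(n_cat)]
    # per-subject observed agreement
    p_i = [(sum(c * c for c in row) - n_rat) / (n_rat * (n_rat - 1))
           for row in table]
    p_bar = sum(p_i) / n_sub       # mean observed agreement
    p_e = sum(p * p for p in p_j)  # expected agreement by chance
    return (p_bar - p_e) / (1 - p_e)

# toy table: 3 elements, 3 candidate resources, 7 reviewers each
ratings = [
    [7, 0, 0],  # unanimous mapping
    [6, 1, 0],  # near-unanimous
    [3, 2, 2],  # contested mapping
]
print(round(fleiss_kappa(ratings), 3))  # -> 0.105
```

Even with two near-unanimous rows, the contested row pulls the chance-corrected agreement down sharply, which illustrates why a modest overall kappa can coexist with a subset of highly agreed mappings.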


5 CONCLUSION

We developed a data element repository that provides a standards-based semantic infrastructure to enable machine-readable QDM data element services in support of EHR-driven phenotype algorithm authoring and execution. Using the ISO/IEC 11179-based representation, we were able to identify a number of underspecified areas in QDM. Compared with an existing MAT-based QDM implementation, we demonstrated the scalability and extensibility of our DER-based approach. In the future, we plan to develop a CIMI-based interface to enhance the extensibility of the DER by incorporating data elements extracted from external clinical data and information models (e.g., HL7 FHIR).

Acknowledgments

This manuscript is an expanded version of a podium abstract presented at the AMIA Clinical Research Informatics (CRI) 2015 conference. This work has been supported in part by funding from PhEMA (R01 GM105688), eMERGE (U01 HG006379, U01 HG006378, and U01 HG006388), and caCDE-QA (1U01CA180940-01A1).

ABBREVIATIONS

QDM: Quality Data Model
EHR: Electronic Health Record
eCQMs: Electronic Clinical Quality Measures
HQMF: Health Quality Measures Format
DER: Data Element Repository
FHIR: Fast Healthcare Interoperability Resources
CIMI: Clinical Information Modeling Initiative
MAT: Measure Authoring Tool
eMERGE: Electronic Medical Records and Genomics
SHARP: Strategic Health Information Technology Advanced Research Project
HMORN: HMO Research Network
PCORnet: National Patient-Centered Clinical Research Network
PhEMA: Phenotype Execution and Modeling Architecture
NQF: National Quality Forum
MDR: Metadata Registry
W3C: World Wide Web Consortium
RDF: Resource Description Framework
OWL: Web Ontology Language
SPARQL: SPARQL Protocol and RDF Query Language
URI: Uniform Resource Identifier
HCLS: Semantic Web Health Care and Life Sciences
ITS: Implementable Technology Specifications
BD2K: Big Data to Knowledge
bioCADDIE: biomedical and healthCAre Data Discovery and Indexing
CEDAR: Center for Expanded Data Annotation and Retrieval
NCI: National Cancer Institute
caDSR: Cancer Data Standards Registry and Repository
API: Application Programming Interface
MMS: Meta-Model Schema
KNIME: Konstanz Information Miner
VSAC: Value Set Authority Center
OMG: Object Management Group
CTS2: Common Terminology Services 2
AMI: Acute Myocardial Infarction
IRA: Inter-Rater Agreement

References

1. McCarty CA, Chisholm RL, Chute CG, Kullo IJ, Jarvik GP, Larson EB, et al. The eMERGE Network: a consortium of biorepositories linked to electronic medical records data for conducting genomic studies. BMC Medical Genomics. 2011;4:13. [PubMed: 21269473]
2. Gottesman O, Kuivaniemi H, Tromp G, Faucett WA, Li R, Manolio TA, et al. The Electronic Medical Records and Genomics (eMERGE) Network: past, present, and future. Genetics in Medicine. 2013;15(10):761-71. [PubMed: 23743551]
3. Chute CG, Ullman-Cullere M, Wood GM, Lin SM, He M, Pathak J. Some experiences and opportunities for big data in translational research. Genetics in Medicine. 2013;15(10):802-9. [PubMed: 24008998]
4. Chute CG, Pathak J, Savova GK, Bailey KR, Schor MI, Hart LA, et al. The SHARPn project on secondary use of Electronic Medical Record data: progress, plans, and possibilities. AMIA Annu Symp Proc. 2011;2011:248-56. [PubMed: 22195076]
5. Pathak J, Bailey KR, Beebe CE, Bethard S, Carrell DC, Chen PJ, et al. Normalization and standardization of electronic health records for high-throughput phenotyping: the SHARPn consortium. J Am Med Inform Assoc. 2013;20(e2):e341-8. [PubMed: 24190931]
6. Thompson EE, Steiner JF. Embedded research to improve health: the 20th annual HMO Research Network conference, March 31-April 3, 2014, Phoenix, Arizona. Clinical Medicine & Research. 2014;12(1-2):73-6. [PubMed: 25352609]
7. Ross TR, Ng D, Brown JS, Pardee R, Hornbrook MC, Hart G, et al. The HMO Research Network Virtual Data Warehouse: a public data model to support collaboration. EGEMS (Wash DC). 2014;2(1):1049. [PubMed: 25848584]
8. Daugherty SE, Wahba S, Fleurence R. Patient-powered research networks: building capacity for conducting patient-centered clinical outcomes research. J Am Med Inform Assoc. 2014;21(4):583-6. [PubMed: 24821741]
9. Mo H, Thompson WK, Rasmussen LV, Pacheco JA, Jiang G, Kiefer R, et al. Desiderata for computable representations of electronic health records-driven phenotype algorithms. J Am Med Inform Assoc. 2015 (in press).
10. Rasmussen LV, Kiefer RC, Mo H, Speltz P, Thompson WK, Jiang G, et al. A modular architecture for electronic health record-driven phenotyping. AMIA Jt Summits Transl Sci Proc. 2015;2015:147-51.
11. Li D, Endle CM, Murthy S, Stancl C, Suesse D, Sottara D, et al. Modeling and executing electronic health records driven phenotyping algorithms using the NQF Quality Data Model and JBoss(R) Drools Engine. AMIA Annu Symp Proc. 2012;2012:532-41. [PubMed: 23304325]
12. Thompson WK, Rasmussen LV, Pacheco JA, Peissig PL, Denny JC, Kho AN, et al. An evaluation of the NQF Quality Data Model for representing Electronic Health Record driven phenotyping algorithms. AMIA Annu Symp Proc. 2012;2012:911-20. [PubMed: 23304366]
13. Quality Data Model (QDM) Specification. 2015. Available from: https://ecqi.healthit.gov/qdm [accessed July 14, 2015]
14. ISO/IEC 11179, Information Technology -- Metadata Registries (MDR). 2015. Available from: http://metadata-standards.org/11179/ [accessed July 14, 2015]
15. Davies J, Gibbons J, Harris S, Crichton C. The CancerGrid experience: metadata-based model-driven engineering for clinical trials. Sci Comput Program. 2014;89:126-43.
16. W3C Standards. 2015. Available from: http://www.w3.org/standards/ [accessed July 14, 2015]
17. RDF. 2015. Available from: http://www.w3.org/RDF/ [accessed December 24, 2015]
18. OWL Web Ontology Language. 2015. Available from: http://www.w3.org/TR/owl-guide/ [accessed December 24, 2015]
19. SPARQL Query Language for RDF. 2015. Available from: http://www.w3.org/TR/rdf-sparql-query/ [accessed December 24, 2015]
20. The W3C Semantic Web Health Care and Life Sciences (HCLS). 2015. Available from: http://www.w3.org/blog/hcls/ [accessed December 24, 2015]
21. The RDF for Semantic Interoperability group. 2015. Available from: http://wiki.hl7.org/index.php?title=RDF_for_Semantic_Interoperability [accessed December 24, 2015]
22. FORCE11. The future of research communications and e-scholarship. 2016. [accessed April 28, 2016]
23. Biosharing. 2016. Available from: https://biosharing.org/ [accessed April 28, 2016]
24. Sansone SA, Rocca-Serra P, Field D, Maguire E, Taylor C, Hofmann O, et al. Toward interoperable bioscience data. Nature Genetics. 2012;44(2):121-6. [PubMed: 22281772]
25. NIH BD2K bioCADDIE project. 2016. Available from: https://biocaddie.org/ [accessed April 28, 2016]
26. Musen MA, Bean CA, Cheung KH, Dumontier M, Durante KA, Gevaert O, et al. The center for expanded data annotation and retrieval. J Am Med Inform Assoc. 2015;22(6):1148-52. [PubMed: 26112029]
27. BD2K bioCADDIE WG3: Metadata Specification. 2016. Available from: https://biocaddie.org/group/working-group/working-group-3-descriptive-metadata-datasets [accessed April 28, 2016]
28. Komatsoulis GA, Warzel DB, Hartel FW, Shanbhag K, Chilukuri R, Fragoso G, et al. caCORE version 3: implementation of a model driven, service-oriented architecture for semantic interoperability. J Biomed Inform. 2008;41(1):106-23. [PubMed: 17512259]
29. NCI caDSR Wiki. 2015. Available from: https://wiki.nci.nih.gov/display/caDSR/caDSR+Wiki [accessed December 24, 2015]
30. Daniel C, Sinaci A, Ouagne D, Sadou E, Declerck G, Kalra D, et al. Standard-based EHR-enabled applications for clinical research and patient safety: CDISC - IHE QRPH - EHR4CR & SALUS collaboration. AMIA Jt Summits Transl Sci Proc. 2014;2014:19-25.
31. Doods J, Bache R, McGilchrist M, Daniel C, Dugas M, Fritz F. Piloting the EHR4CR feasibility platform across Europe. Methods Inf Med. 2014;53(4):264-8. [PubMed: 24954881]
32. PhUSE Semantic Technology Working Group CDISC Standards. 2015. Available from: https://github.com/phuse-org/rdf.cdisc.org [accessed August 3, 2015]
33. Measure Authoring Tool. 2015. Available from: https://www.emeasuretool.cms.gov/ [accessed December 24, 2015]
34. Linked Data API. 2015. Available from: https://code.google.com/p/linked-data-api/ [accessed July 14, 2015]
35. RDF Turtle: Terse RDF Triple Language. 2015. Available from: http://www.w3.org/TR/turtle/ [accessed December 28, 2015]
36. 4store. 2015. Available from: https://github.com/garlik/4store [accessed July 14, 2015]
37. Mo H, Pacheco JA, Rasmussen LV, Speltz P, Pathak J, Denny JC, et al. A prototype for executable and portable electronic clinical quality measures using the KNIME analytics platform. AMIA Jt Summits Transl Sci Proc. 2015;2015:127-31.
38. HL7 FHIR DSTU 2. 2015. Available from: http://www.hl7.org/implement/standards/fhir/2015May/index.html [accessed July 14, 2015]
39. Clinical Information Modeling Initiative. 2015. Available from: http://www.opencimi.org/ [accessed July 14, 2015]
40. Jiang G, Solbrig HR, Kiefer R, Rasmussen LV, Mo H, Speltz P, et al. A standards-based semantic metadata repository to support EHR-driven phenotype authoring and execution. Stud Health Technol Inform. 2015;216:1098. [PubMed: 26262397]
41. Jiang G, Solbrig HR, Kiefer R, Rasmussen LV, Mo H, Pacheco JA, et al. Harmonization of Quality Data Model with HL7 FHIR to support EHR-driven phenotype authoring and execution: a pilot study. AMIA Annu Symp Proc. 2015 (in press).
42. Fleiss' kappa. 2016. Available from: https://en.wikipedia.org/wiki/Fleiss'_kappa [accessed May 17, 2016]
43. Conway M, Berg RL, Carrell D, Denny JC, Kho AN, Kullo IJ, et al. Analyzing the heterogeneity and complexity of Electronic Health Record oriented phenotyping algorithms. AMIA Annu Symp Proc. 2011;2011:274-83. [PubMed: 22195079]
44. Pathak J, Kho AN, Denny JC. Electronic health records-driven phenotyping: challenges, recent advances, and perspectives. J Am Med Inform Assoc. 2013;20(e2):e206-11. [PubMed: 24302669]
45. Kho AN, Pacheco JA, Peissig PL, Rasmussen L, Newton KM, Weston N, et al. Electronic medical records for genetic research: results of the eMERGE consortium. Sci Transl Med. 2011;3(79):79re1.
46. Clinical Quality Measures. 2015. Available from: http://www.healthit.gov/policy-researchers-implementers/clinical-quality-measures [accessed July 14, 2015]
47. NLM Value Set Authority Center (VSAC). 2015. Available from: https://vsac.nlm.nih.gov/ [accessed July 14, 2015]
48. Common Terminology Services 2 (CTS2) Specification. 2015. Available from: http://www.omg.org/spec/CTS2/1.1/ [accessed July 14, 2015]
49. bioCADDIE Pilot Project. 2015. Available from: https://biocaddie.org/group/pilot-project/pilot-project-4-2-%E2%80%93-feasibility-study-indexing-clinical-research-data-using-hl7 [accessed December 24, 2015]
50. OHDSI (Observational Health Data Sciences and Informatics). 2016. Available from: http://www.ohdsi.org/ [accessed May 17, 2016]


Highlights

• A machine-readable metadata repository for the Quality Data Model (QDM) is developed.
• A number of underspecified areas in the QDM are observed.
• A harmonization of QDM with other clinical models developed in HL7 is proposed.
• The QDM implementation in the Measure Authoring Tool (MAT) is compared.


Figure 1.

Quality Data Model (QDM) element structure. (Reproduced from the QDM Element Specification13.)


Figure 2.

The high-level data description metamodel specified in ISO/IEC 11179 (source: the ISO/IEC 11179 specification document14).


Figure 3.

System Architecture.

Figure 4.

A Protégé 5 screenshot illustrating the QDM schema and its instance-elements rendered in the OWL format. The left-hand panel shows the ISO/IEC 11179 meta-model schema and its QDM schema extension; the panel in the middle shows a list of data elements for a particular class (e.g., QDMDatatype); the right-hand panel shows the metadata (e.g., label, textual definition, types, and context) for a particular data element (e.g., Diagnosis, Active).

Figure 5.

(A) A screenshot of the PhEMA phenotype algorithm authoring application illustrating the use of QDM data elements and their metadata. The left panel shows a QDM model element browser that renders the QDM elements in a hierarchical tree, with a feature to highlight metadata for a particular data element (e.g., its textual definition). (B) A screenshot of the PhEMA phenotype algorithm execution platform, based on the KNIME platform, illustrating the use of QDM data elements in constructing executable phenotype algorithms.

Figure 6.

A QDM-based semantic representation of an example criterion: "Diagnosis, Active: Acute Myocardial Infarction (AMI) (ordinality: Principal)" starts during Occurrence A of "Encounter Inpatient".
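The criterion in Figure 6 can be approximated as a handful of subject-predicate-object triples. The sketch below is illustrative only: the predicate names (`qdm:datatype`, `qdm:startsDuring`, etc.) are invented stand-ins, not the DER's actual RDF vocabulary.

```python
# Toy triple store (predicate names invented for illustration) mirroring the
# Figure 6 criterion: "Diagnosis, Active: AMI (ordinality: Principal)"
# starts during Occurrence A of "Encounter Inpatient".
triples = {
    ("crit1", "qdm:datatype", "qdm:DiagnosisActive"),
    ("crit1", "qdm:valueSet", "Acute Myocardial Infarction (AMI)"),
    ("crit1", "qdm:ordinality", "Principal"),
    ("crit1", "qdm:startsDuring", "occurrenceA"),
    ("occurrenceA", "qdm:datatype", "qdm:EncounterPerformed"),
    ("occurrenceA", "qdm:valueSet", "Encounter Inpatient"),
}

def objects(subject, predicate):
    """Return all objects for a (subject, predicate) pair, SPARQL-style."""
    return [o for (s, p, o) in triples if s == subject and p == predicate]

# follow startsDuring to the occurrence, then read its value set
occ = objects("crit1", "qdm:startsDuring")[0]
print(objects(occ, "qdm:valueSet")[0])  # -> Encounter Inpatient
```

In the actual DER such a graph would live in the RDF store and be traversed with SPARQL rather than list comprehensions, but the shape of the data is the same.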


Table 1

The QDM modules, core model element definitions, and examples.

Data Model Module
- Category: A category consists of a single clinical concept identified by a value set. A category is the highest level of definition for a QDM element. The QDM currently contains 19 categories. Example: Medication, Procedure, Condition/Diagnosis/Problem, Communication, and Encounter.
- Datatype: A datatype is the context in which each category is used to describe a part of the clinical care process. Example: 'Medication, Active' and 'Medication, Administered' as applied to the Medication category.
- Attribute: An attribute provides specific detail about a QDM element. QDM elements have two types of attributes: datatype-specific and data flow attributes. Example: 'Dose', 'Frequency', 'Route', 'Start Datetime', and 'Stop Datetime' for the datatype 'Medication, Active'.
- Value Sets: A value set is a set of values that contain specific codes derived from a particular code system. Value sets are used to define the set of codes that can possibly be found in a patient record for a particular concept. Example: Laboratory Test, Performed: "value set A" (result: "value set B").
- Attribute Filters: Attribute filters can be applied to QDM elements to further restrict the set of events that are returned. Example: filter by existence of a recorded value, filter by value set, or filter by date.

Logic Module
- Functions: Min, Max, Median, Average, Count, Sum, Age At, DateDiff, and TimeDiff. Example: Min >= 120 mmHg of: "Physical Exam, Performed: Systolic Blood Pressure (result)" during "Measurement Period".
- Subset Operators: First, Second, Third, Fourth, Fifth, Most Recent, Intersection Of, Union Of, Satisfies Any, and Satisfies All. Example: Intersection of: "Encounter, Performed: Office Visit" during "Measurement Period"; "Encounter, Performed: Office Visit" ends before start of "Diagnosis, Active: Diabetes".
- Logic Operators: And, Or, Not. Example: AND: "Encounter: Hospital Inpatient"; AND: "Physical Exam, Performed: Weight Measurement" during "Measurement Period".
- Comparison Operators: Equal To, Less Than, Less Than Or Equal To, Greater Than, and Greater Than Or Equal To. Example: "Encounter: Hospital Inpatient (duration > 120 day(s))".
- General Relationship Operators: Fulfills. Example: "Communication: Provider to Provider: Consult Note" fulfills "Intervention, Order: Referral".
- Temporal Operators: 'Starts Before Start Of', 'Starts After Start Of', 'Starts Before End Of', 'Starts Concurrent With', 'Starts During', 'Ends During', 'Concurrent With', 'During', 'Overlaps', etc. Example: "Diagnosis, Active: Diabetes" starts during "Encounter: Hospital Inpatient".
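The temporal operators in the Logic Module can be grounded with a small sketch. This is our own simplification for illustration, assuming events are represented as (start, end) date intervals; it is not QDM's executable semantics.

```python
# Illustrative sketch (not from the paper): evaluating the QDM temporal
# operator "starts during" from Table 1 over simple date intervals.
from datetime import date

def starts_during(event, context):
    """True if `event` starts within the `context` interval (inclusive)."""
    return context[0] <= event[0] <= context[1]

# toy intervals for: "Diagnosis, Active: Diabetes" starts during
# "Encounter: Hospital Inpatient"
diagnosis = (date(2015, 3, 2), date(2015, 3, 20))
encounter = (date(2015, 3, 1), date(2015, 3, 5))
print(starts_during(diagnosis, encounter))  # -> True
```

Other temporal operators ('Ends During', 'Overlaps', etc.) follow the same pattern with different endpoint comparisons.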


Table 2

The database tables for the core QDM elements implemented in the MAT.

CATEGORY: CATEGORY_ID, DESCRIPTION, ABBREVIATION
DATA_TYPE: DATA_TYPE_ID, DESCRIPTION, CATEGORY_ID
QDM_ATTRIBUTES: ID, NAME, DATA_TYPE_ID, QDM_ATTRIBUTE_TYPE
ATTRIBUTE_DETAILS: ATTRIBUTE_DETAILS_ID, ATTR_NAME, CODE, CODE_SYSTEM, CODE_SYSTEM_NAME, MODE, TYPE_CODE
CODE_SYSTEM: CODE_SYSTEM_ID, DESCRIPTION, CATEGORY_ID, ABBREVIATION
CODE: CODE_ID, CODE, DESCRIPTION, CODE_LIST_ID
OPERATOR: ID, LONG_NAME, SHORT_NAME, FK_OPERATOR_TYPE
OPERATOR_TYPE: ID, NAME


Table 3

The service URI scheme designed for the REST service API.

/qdm/categories -- get all QDM categories
/qdm/category/[category] -- get a particular QDM category by name
/qdm/datatypes -- get all QDM datatypes
/qdm/datatype/[datatype] -- get a particular QDM datatype by name
/qdm/category/[category]/datatypes -- get all QDM datatypes for a particular category
/qdm/category/[category]/datatype/[datatype] -- get a particular QDM datatype for a particular category
/qdm/attributes -- get all QDM attributes
/qdm/attribute/[attribute] -- get a particular QDM attribute by name
/qdm/datatype/[datatype]/attributes -- get all QDM attributes for a particular datatype
/qdm/datatype/[datatype]/attribute/[attribute] -- get a particular QDM attribute for a particular datatype
/qdm/functions -- get all QDM functions
/qdm/function/[function] -- get a particular QDM function
/qdm/comparisonOperators -- get all QDM comparison operators
/qdm/comparisonOperator/[comparisonOperator] -- get a particular QDM comparison operator by name
/qdm/logicalOperators -- get all QDM logical operators
/qdm/logicalOperator/[logicalOperator] -- get a particular QDM logical operator
/qdm/relationshipOperators -- get all QDM relationship operators
/qdm/relationshipOperator/[relationshipOperator] -- get a particular QDM relationship operator
/qdm/subsetOperators -- get all QDM subset operators
/qdm/subsetOperator/[subsetOperator] -- get a particular QDM subset operator
/qdm/temporalOperators -- get all QDM temporal operators
/qdm/temporalOperator/[temporalOperator] -- get a particular QDM temporal operator
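The scheme above composes mechanically from path segments. The helper below is hypothetical (the function name and base URL are ours for illustration; the paper does not publish a service host):

```python
# Hypothetical client-side helper for composing DER REST service URIs
# following the Table 3 scheme. BASE is a placeholder, not a real endpoint.

def qdm_uri(base, *segments):
    """Compose a service URI, e.g. qdm_uri(BASE, 'category', 'Medication', 'datatypes')."""
    return base.rstrip("/") + "/qdm/" + "/".join(segments)

BASE = "http://example.org/der"  # placeholder host

print(qdm_uri(BASE, "categories"))
# -> http://example.org/der/qdm/categories
print(qdm_uri(BASE, "datatype", "MedicationActive", "attributes"))
# -> http://example.org/der/qdm/datatype/MedicationActive/attributes
```

A client would then issue an HTTP GET against the composed URI to retrieve the machine-readable metadata for the named QDM element.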


Table 4

The mappings between QDM and FHIR that were highly agreed upon among the reviewers (n=7).

QDM data element -- FHIR data element -- reviewers in agreement
Device -- Device -- 7
Diagnosis, Active -- Condition -- 7
Encounter -- Encounter -- 7
Medication -- Medication -- 7
Procedure -- Procedure -- 7
Communication -- Communication -- 6
Communication, From Patient to Provider -- Communication -- 6
Communication, From Provider to Patient -- Communication -- 6
Communication, From Provider to Provider -- Communication -- 6
Condition/Diagnosis/Problem -- Condition -- 6
Diagnosis, Family History -- Family History -- 6
Diagnosis, Inactive -- Condition -- 6
Diagnosis, Resolved -- Condition -- 6
Diagnostic Study, Order -- Diagnostic Order -- 6
Encounter, Active -- Encounter -- 6
Encounter, Performed -- Encounter -- 6
Medication, Administered -- Medication Administration -- 6
Medication, Dispensed -- Medication Dispense -- 6
Medication, Order -- Medication Prescription -- 6
Patient Characteristic -- Patient -- 6
Substance -- Substance -- 6
