Smarter with Data - BioClin Labs · 2018-09-20
Smarter with Data
Barry McManus – Empowerment Quality Engineering
e: [email protected] t: +44 78 79 81 63 63
Objectives
• Why are we worried about data integrity?
• What can we do to preserve data integrity from an IT perspective?
• Data integrity considerations in software specification, design and testing.
• What else do we need to do to protect data integrity?
Data Integrity – What is it?
• Data Integrity is “The extent to which all data are maintained, complete, consistent, and accurate throughout the data lifecycle” – source: MHRA
• Data and records should meet the ALCOA+ (CCEA) principles:
 Attributable, Legible, Contemporaneous, Original and Accurate;
 plus Complete, Consistent, Enduring, and Available when needed
• Not limited to electronic data – don’t forget paper records, people and process
Data Integrity – Why Are We Talking About This … Again?
• Data integrity is critical to patient safety and product quality
• There have been a number of recent findings globally, across the GxP areas, where data integrity has been an issue
• Data integrity problems break trust
• “We rely largely on trusting the firm to do the right thing when we are not there” – Karen Takahashi, Senior Policy Advisor, FDA/CDER/Office of Compliance
What Can We Do From An IT Perspective To Address This?
Data integrity requires that:
• Systems are appropriately designed
• Systems are validated
• Processes are in place to ensure data quality
• The content of the data is trustworthy and reliable
Validation – Monitoring – Control

But remember: the response should be matched to the risk the system and records pose to patient safety, product quality, and data integrity.
Data Integrity Through The Electronic System Lifecycle
Validation should ensure that data integrity is designed into the electronic system:
• Software code
• Infrastructure

Data integrity should be maintained throughout the production lifecycle of the system:
• By the users
• By the IT department / IT vendor

Data integrity should be considered and managed when the system is retired.
Identify Systems Within Scope And Ensure They Are Validated
Systems within scope usually include:
• Manufacturing systems
• Laboratory systems
• eTMF
• Management systems (e.g. CTMS)
• Databases (e.g. IRT, PV, etc.)
• Systems controlling or monitoring environmental conditions
• Automation systems
• Etc.

Validate the system to ensure it is fit for purpose:
• Create specifications
• Assess suppliers
• Test against specification
• Approval and release
• Ensure that interfaces to equipment / other systems are included in the validation effort, to demonstrate data is completely and accurately transferred.
Those systems containing records and data which pose the most risk to patient safety and product quality should
be prioritised (e.g. Quality Control systems, adverse event reporting systems, etc).
Requirements Analysis
Example analysis questions:
• What does the data look like? (metadata)
• Where is data entered (interfaces), and by whom (roles/permissions)?
• What actions are performed on the data?
• What are the data dependencies?
• Where is the data stored?
• What oversight is performed on the data?
• What is the consequence of data loss?
Design (DI) – What is it for?
• Functional definition of components
• Data structure
• Control flow

Describes the development and operational environments, e.g. server (hardware) resources, programming language, database, common components, performance needs, operational functions, safety considerations, security architecture.

Input:
• Defines information provided by the system user
• Defines data formats to be imposed
• Defines default values for data items

Processing:
• Defines rules for passing data between components
• Defines algorithms for data processing

Output:
• Defines data and destination
• Defines if files are created for future use

Storage:
• Defines required files
• Defines protection to be included by the storage
Design – Data Store

Relational database: data is structured into tables with built-in rules. Provides an additional data integrity validation layer:
• Designed to enforce data integrity
• Enforces referential integrity
• Supports integrity constraints

Large DI benefit:
• Metadata is maintained
• Atomic transactions
• Can enforce static data

Database management tools:
• Journals and logs
• Authorised access
• Supports networking, multiple users, error recovery, archiving
• Easy to maintain (once the skill set is achieved)

Adequate design is essential (data flow diagrams, entity–relationship diagrams, normalised form).
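The database-level integrity layer described above can be sketched with SQLite. This is a minimal, hypothetical schema (the table and column names are illustrative, not from the slides); the point is that CHECK and foreign-key constraints reject bad data independently of any application code.

```python
import sqlite3

# Illustrative schema: the database itself enforces range and referential
# integrity, regardless of what the application code does.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enable referential integrity in SQLite
conn.execute("""
    CREATE TABLE users (
        user_id INTEGER PRIMARY KEY,
        role    TEXT NOT NULL CHECK (role IN ('QP', 'ANALYST', 'ADMIN'))
    )""")
conn.execute("""
    CREATE TABLE esignatures (
        esign_id INTEGER PRIMARY KEY CHECK (esign_id BETWEEN 1 AND 9999),
        user_id  INTEGER NOT NULL REFERENCES users(user_id)
    )""")

conn.execute("INSERT INTO users VALUES (1, 'QP')")
conn.execute("INSERT INTO esignatures VALUES (42, 1)")  # accepted

try:
    conn.execute("INSERT INTO esignatures VALUES (42000, 1)")  # out of range
except sqlite3.IntegrityError as e:
    print("rejected:", e)

try:
    conn.execute("INSERT INTO esignatures VALUES (43, 99)")  # no such user
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

Both bad inserts fail with an integrity error; only the well-formed row is stored.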
Implementation
Defensive Coders are pessimists!
Simple Rules
Some examples:
- Initialise all data variables, e.g. int esign_id = 0;
- Filter out incorrect data before it is consumed by or returned by the component, e.g. check esign_id_role == QP_ROLE;
- Self-diagnosing code: log statements that can be switched on to troubleshoot problems (complex, real-time systems)
- Use relational databases to apply integrity constraints independently of the code, e.g. esign_id must be numeric, 4 digits, positive and within the range 0001 to 9999
- Validate at the compiler level
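The defensive rules above can be sketched in Python. The names `esign_id` and `QP_ROLE` are taken from the slide's pseudocode and are purely illustrative, not a real API.

```python
# Minimal defensive-coding sketch: validate data at the boundary, before
# it is consumed by the rest of the component.
QP_ROLE = "QP"  # illustrative role constant from the slide's example

def parse_esign_id(raw):
    """Filter out incorrect data: esign_id must be numeric, positive,
    and within the range 0001 to 9999."""
    if not str(raw).isdigit():
        raise ValueError(f"esign_id must be numeric, got {raw!r}")
    esign_id = int(raw)
    if not 1 <= esign_id <= 9999:
        raise ValueError(f"esign_id out of range 0001-9999: {esign_id}")
    return esign_id

def check_role(role):
    """Reject callers that do not hold the expected role."""
    if role != QP_ROLE:
        raise PermissionError(f"expected role {QP_ROLE!r}, got {role!r}")

print(parse_esign_id("0042"))  # → 42
```

A pessimistic component raises early on bad input rather than letting it propagate into stored records.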
Unit tests
Code review (supported by automated compiler / static-analysis checks)
Functional Testing
Test positive scenarios (~25% of effort)

Test negative scenarios (~75% of effort):
• Test for the wrong type of data
• Test for incorrectly formatted data
• Test for logically incorrect data (e.g. 29-FEB-2015)
• Test for an incorrect sequence of events
• Test against constraints
• Test for faults in non-functional requirements
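The slide's own example of logically incorrect data (29-FEB-2015) makes a compact negative-test sketch. `parse_dose_date` is a hypothetical helper, not from the source system; Python's `strptime` happens to reject impossible dates for us.

```python
from datetime import datetime

def parse_dose_date(text):
    """Hypothetical helper: parse a DD-MON-YYYY date string.
    strptime rejects logically incorrect dates such as 29-FEB-2015."""
    return datetime.strptime(text, "%d-%b-%Y").date()

# Positive scenario (~25% of the effort)
print(parse_dose_date("28-FEB-2015"))  # → 2015-02-28

# Negative scenarios (~75% of the effort)
for bad in ["29-FEB-2015",   # logically incorrect: 2015 is not a leap year
            "2015/02/28",    # incorrectly formatted data
            "not-a-date"]:   # wrong type of data
    try:
        parse_dose_date(bad)
        print("BUG: accepted", bad)
    except ValueError:
        print("rejected:", bad)
```

The weighting matters: most of the test effort goes into inputs the system must refuse, not inputs it should accept.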
Database testing – verify that relational integrity is built into the database:
• Create new SQL statements from the design to challenge the database structure and data integrity
• Access the DB to amend the constraints, to inject errors into the system
• Verify database referential integrity (where data in one location supports the data in another location)
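A referential-integrity challenge of the kind listed above can be sketched with SQLite: deliberately omit the constraint, inject an orphan record, then write a SQL statement that hunts for rows whose supporting data is missing. The two-table schema is hypothetical.

```python
import sqlite3

# Challenge test: inject an error, then prove the check query finds it.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE batches (batch_id INTEGER PRIMARY KEY);
    CREATE TABLE results (result_id INTEGER PRIMARY KEY,
                          batch_id  INTEGER);  -- FK deliberately omitted
    INSERT INTO batches VALUES (1);
    INSERT INTO results VALUES (10, 1);   -- supported by a batch
    INSERT INTO results VALUES (11, 99);  -- injected orphan record
""")

# Find results whose data in one location (results) is not supported
# by the data in another location (batches).
orphans = conn.execute("""
    SELECT r.result_id
    FROM results r
    LEFT JOIN batches b ON b.batch_id = r.batch_id
    WHERE b.batch_id IS NULL
""").fetchall()
print("orphan results:", orphans)  # → [(11,)]
```

If the database had enforced the foreign key, the orphan insert itself would have failed; the test demonstrates both the gap and the detection query.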
Mind-set: Never trust the user – never trust the code – never trust the data
Non-Functional Testing
“What happens if?” testing:
• Performance testing
• Security testing, e.g. security assessment, penetration test, role-based testing
• Installation, upgrade and rollback testing
• Fault-tolerance testing
• Error and logging testing
• Audit trail, review-based testing
WARNING: know when to stop destructive testing – be wary of running a “race to the bottom” of obscure and ever more improbable test scenarios.
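Audit-trail review testing can be sketched as below. The in-memory trail and field names are illustrative only (not the source system's design); the idea is that every entry must be attributable, contemporaneous and complete, so a review test can assert on each record.

```python
import datetime

# Illustrative in-memory audit trail: each change records who, when and what.
audit_trail = []

def record_change(user, field, old, new):
    audit_trail.append({
        "user":  user,    # attributable
        "when":  datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "field": field,   # complete: what changed
        "old":   old,
        "new":   new,
    })

record_change("bmcmanus", "batch_status", "QUARANTINE", "RELEASED")

# Audit-trail review test: no entry may be missing a required field.
for entry in audit_trail:
    assert all(entry[k] not in (None, "") for k in ("user", "when", "field"))
print("audit trail entries:", len(audit_trail))  # → 1
```

A real review test would also verify that the trail is append-only and that switching the feature off is itself an audited event.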
Human interactions are weak spots:
• Ensure SOP training and adherence in end-user use, administration and IT support
Operational use: regularly check that the system and operations still work:
• Re-use the regression suites from above
Technical Thoughts

Human actions (intentional and unintentional) are weak spots:
• Plan and build in data integrity controls across the input, processing and storage levels
• Focus on non-functional requirements
• Check that the design deals with data definition, flow and storage
Negative tests “show the software doesn’t do what it is NOT supposed to do”:
• Often require changes in the configuration in order to force negative scenarios
• Require a separate, dedicated test environment
• NEVER use the live system or live data!
• UAT tests should not overly focus on negative testing
Cons:
• Technical skill set required
• Increased up-front costs as a result of additional activities (cheaper in the long term)

Pros:
• Quality really is built in
• Easier maintenance and operational use
• Reduced risk of regulatory inspection findings
• Cheaper – less rework at the delivery stage
Operations / Live Use
Ensure availability of the data – resilience
• Do you need the system 24/7?
• Are all the users in one location, or geographically dispersed?
• Could system unavailability result in loss of data?

Build in resilience based on the criticality and use of the system. Consider:
• Power supply
• Internet
• Server clustering
• Replication to a separate location
• Automatic failover
• Redundant network switches
Operations / Live Use
Ensure availability of the data – backup
Ensure the availability of archived data
Consider how data will be backed up:
• To tape
• To hard drive

Consider the location of the backup:
• Same location
• Off-site location

Consider the frequency of backup:
• Nightly
• Weekly

Test backup and restore

Monitor backup to ensure success
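The "test backup and restore" step can be sketched as follows. The file names and contents are invented for illustration; the point is that a restore test should verify with a checksum that the data came back complete and accurate, not merely that a file exists.

```python
import hashlib
import pathlib
import shutil
import tempfile

def sha256(path):
    """Checksum used to verify the restored data matches the backup."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

workdir = pathlib.Path(tempfile.mkdtemp())
source = workdir / "results.csv"              # illustrative data file
source.write_text("batch_id,assay\n1,98.7\n")

backup = workdir / "results.csv.bak"
shutil.copy2(source, backup)                  # "back up" the file

source.unlink()                               # simulate data loss
shutil.copy2(backup, workdir / "results.csv") # restore from backup

restored = workdir / "results.csv"
print("restore verified:", sha256(restored) == sha256(backup))  # → True
```

A production restore test would pull from the real backup medium (tape, off-site copy) into a separate environment and verify the application can actually read the restored data.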
Operations / Live Use
Ensure the security of the data
Manage access at all levels
• Develop a strong security policy, and make sure every employee is aware of it and understands it
• Keep anti-virus software and security patches up to date for all systems
• Separate / partition the network into domains, with filters between domains
Change Management
• Whenever making a change to the system, consider the impact on data integrity
• Perform the necessary regression testing, including the tests described earlier
• Ensure that any security settings altered to enable installation of a change are reset