The results of your work on Technical Measures will also be part of your:
✅ Vendor Risk Assessment: if you are B2B and your customers are structured companies with a GDPR compliance system in place, they will assess your compliance by asking you to fill out checklists and documents that demonstrate it.
✅ Data Processing Agreement: as part of your DPA, you will need to clarify how you process the data that your customers transfer to you. The technical measures you describe become part of your contract and obligations towards the customer.
✅ Risk Assessment or Data Protection Impact Assessment (DPIA), if you need to carry them out, which aim to document your security controls (safeguards) and mitigate your company's risks. Companies wishing to demonstrate compliance should perform the Risk Assessment or DPIA before processing personal data.
Navigating through multiple templates, copy-pasting, and ambiguous guidelines can be overwhelming. ChecksME is here to help.
Our AI-powered platform guides you step-by-step through everything you need, from technical assessments, to legal questions, to policy creation — all in one place.
Ready to take control of your data protection strategy? Get started today and get data protection done - once and for all.
ℹ️ Development, test, staging and production environments should be separated. No real personal data should be used for testing in the development, test and staging environments.
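One way to keep environments apart is to drive all settings from a single environment variable. A minimal sketch, assuming an `APP_ENV` variable and illustrative connection strings (the names are not prescriptive):

```python
# A minimal sketch of per-environment configuration, assuming an APP_ENV
# variable; database names and hosts are illustrative only.
import os

ENV = os.getenv("APP_ENV", "development")   # development / test / stage / production

DATABASE_URL = {
    "development": "postgresql://localhost/app_dev",
    "test":        "postgresql://localhost/app_test",
    "stage":       "postgresql://stage-db.internal/app",
    "production":  "postgresql://prod-db.internal/app",
}[ENV]

# Guard rail: outside production only synthetic fixtures, never copies of real data.
ALLOW_REAL_PERSONAL_DATA = ENV == "production"
```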
Secure coding
ℹ️ Code development should follow approved guidelines or best practices (e.g. OWASP, ISO). Code should be checked for both normal and exceptional error conditions. All development and debugging code should be removed from the application prior to release.
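As a small illustration of one OWASP-style practice, the sketch below uses parameterized queries instead of string concatenation and handles exceptional errors explicitly. The table and function names are hypothetical:

```python
# A minimal sketch of secure coding practices: parameter binding against SQL
# injection and explicit handling of exceptional errors. Table name is illustrative.
import sqlite3

def find_user(conn: sqlite3.Connection, email: str):
    try:
        # Parameter binding prevents SQL injection; never interpolate user input.
        cur = conn.execute("SELECT id, email FROM users WHERE email = ?", (email,))
        return cur.fetchone()
    except sqlite3.Error:
        # Exceptional errors are handled explicitly instead of leaking stack traces.
        return None
```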
Security testing in development and acceptance
ℹ️ Any application must be thoroughly tested, simulating realistic use and automating tests where possible. Code should be regularly reviewed for bugs.
Automated unit tests should run on the code with >75% code coverage.
A formal acceptance process for the application should be in place.
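A coverage threshold like the one above can be enforced automatically in CI. A minimal sketch, assuming the `pytest` and `coverage` packages and a test suite under `./tests`:

```python
# Hypothetical CI gate: fail the build if unit-test coverage drops below 75%.
import subprocess
import sys

# Run the test suite under coverage measurement.
subprocess.run(["coverage", "run", "-m", "pytest", "tests"], check=True)

# 'coverage report --fail-under=75' exits non-zero when coverage is below 75%.
result = subprocess.run(["coverage", "report", "--fail-under=75"])
sys.exit(result.returncode)
```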
ℹ️The organization should consider the use of principles of “data security by design” and “privacy by design” when designing an application and its supporting backend. The organization should take care that these principles are applied to outsourced development and suppliers. The application of these principles should be regularly reviewed.
ℹ️ The application should rely on secure session management: session tokens should be randomly generated with sufficient entropy and signed using a secure algorithm. Sessions should terminate when the user logs out or after a period of inactivity.
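A minimal sketch of such session-token handling, assuming a server-side session store and a signing key loaded from a secret manager (the key generation below is illustrative):

```python
# High-entropy session tokens, HMAC-SHA256 signing, and an idle timeout.
import hashlib
import hmac
import secrets
import time

SIGNING_KEY = secrets.token_bytes(32)   # in practice, load from a secret manager
IDLE_TIMEOUT = 30 * 60                  # invalidate sessions after 30 min of inactivity

def issue_token() -> str:
    """Generate a random session token and sign it."""
    token = secrets.token_urlsafe(32)   # high-entropy random value
    sig = hmac.new(SIGNING_KEY, token.encode(), hashlib.sha256).hexdigest()
    return f"{token}.{sig}"

def verify_token(signed: str, last_seen: float) -> bool:
    """Check the signature and reject sessions that have been idle too long."""
    token, _, sig = signed.rpartition(".")
    expected = hmac.new(SIGNING_KEY, token.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and (time.time() - last_seen) < IDLE_TIMEOUT
```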
ℹ️ The application should protect the information it manages through measures such as a secure authentication method, encryption of data in transit and at rest, and the use of digital certificates issued by a trusted authority. These topics should be considered from the start of the development or acquisition process. Vulnerability assessments and penetration tests (VA/PT) should be performed following best practices (e.g. OWASP testing guidelines) and documented in a periodic report.
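For data in transit, this mostly means refusing unencrypted or legacy connections and validating certificates. A minimal sketch using the Python standard library; the URL is illustrative:

```python
# Enforce TLS 1.2+ with certificate and hostname verification (stdlib defaults).
import ssl
import urllib.request

ctx = ssl.create_default_context()                 # verifies certificates and hostnames
ctx.minimum_version = ssl.TLSVersion.TLSv1_2       # refuse legacy TLS/SSL versions

with urllib.request.urlopen("https://example.com/api/health", context=ctx) as resp:
    print(resp.status)
```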
ℹ️ The application should only run on current OS versions that are still supported by the manufacturer. Users should be informed whenever a new version of the application is released. The application should be released only after a test and acceptance phase, using a set of criteria defined by the organization that takes the security of the information into account.
ℹ️ Medical devices involved in the project must meet additional security controls. This means ensuring you know exactly which components and libraries you use, ensuring communication with any medical device is secure, ensuring you have suitable CE certification, etc.
ℹ️ Superuser, root, privileged and admin accounts should be limited, listed and periodically reviewed. Privilege escalation should be prevented on all servers and computers.
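A small review helper can make the periodic check easier. A sketch for Unix-like systems, assuming the privileged group names below match your platforms:

```python
# List members of privileged Unix groups so they can be compared
# against an approved list during the periodic review.
import grp

PRIVILEGED_GROUPS = ["sudo", "wheel", "admin"]   # adjust to the platforms in use

for name in PRIVILEGED_GROUPS:
    try:
        members = grp.getgrnam(name).gr_mem
        print(f"{name}: {', '.join(members) or '(no members)'}")
    except KeyError:
        pass  # group not present on this system
```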
ℹ️ Authentication should be enforced for all non-public data. Single-Sign-On mechanisms should be excluded. All authentication data should be stored in a secure way (e.g. stored in an encrypted database). Login flows should be tested, ensuring authentication and authorisation are done correctly.
ℹ️ Authentication should be enforced for all non-public data. Single-Sign-On mechanisms should be excluded. All authentication data should be stored in a secure way (e.g. stored in an encrypted database). Multi-factor authentication should be used for first logins, password resets and logins from new devices when accessing systems with personal and sensitive data.
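"Stored in a secure way" means, at minimum, never keeping passwords in plain text. A minimal sketch using a salted memory-hard hash from the standard library; the parameters shown are common defaults, not a prescription:

```python
# Store only a salted scrypt hash of each password, never the password itself.
import hashlib
import secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = secrets.token_bytes(16)                       # per-user random salt
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return secrets.compare_digest(candidate, digest)     # constant-time comparison
```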
ℹ️ All personal data should be encrypted using state-of-the-art encryption techniques and strong keys. Special category data should be encrypted using the strongest encryption possible. Cryptography should be used when data is transferred.
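A minimal sketch of encrypting a personal-data field at rest, assuming the third-party `cryptography` package; in practice the key comes from a key-management system rather than being generated inline:

```python
# Symmetric encryption of personal data at rest with an authenticated cipher.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, load from a key-management system
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"jane.doe@example.com")   # store this, not the plaintext
plaintext = cipher.decrypt(ciphertext)                  # only where strictly needed
```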
ℹ️ Data minimisation should be enforced in the application and only allow strictly necessary data to be collected and displayed.
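One simple way to enforce this at the API layer is an allow-list of fields. A sketch with illustrative field names:

```python
# Return only the fields the client strictly needs; everything else is dropped.
ALLOWED_FIELDS = {"id", "display_name", "preferred_language"}

def minimise(record: dict) -> dict:
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
```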
ℹ️ Any changes or updates to data must be propagated throughout the system. Systems should be put in place to ensure integrity of data.
ℹ️ Sensitive and identifying data should be separated and stored separately in different systems (data segregation). The two sets of data should be associated with randomly-generated pseudonyms with high levels of entropy.
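A minimal sketch of this segregation, assuming two separate stores reachable by different services; the store names and record fields are illustrative:

```python
# Identifying data and sensitive data live in separate systems,
# linked only by a high-entropy random pseudonym.
import secrets

identity_store = {}   # e.g. a database reachable only by the identity service
clinical_store = {}   # e.g. a separate database holding the sensitive records

def register(name: str, email: str, sensitive_record: dict) -> str:
    pseudonym = secrets.token_hex(16)     # 128 bits of entropy, not derived from the data
    identity_store[pseudonym] = {"name": name, "email": email}
    clinical_store[pseudonym] = sensitive_record
    return pseudonym
```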
ℹ️ All personal data should be automatically deleted after a defined period of time or when no legal basis authorizes the organization to keep it. Backups should expire after a certain period of time.
ℹ️ The maximum storage time for any data present in the live database and in backups should be defined.
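Automatic deletion is usually a scheduled job against the defined retention period. A hypothetical sketch; the retention period, database, table and column names are illustrative and assume timestamps stored in ISO format:

```python
# Scheduled retention job: delete records older than the defined maximum storage time.
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 365   # assumption: the organization defined a 12-month retention period

cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
conn = sqlite3.connect("app.db")
conn.execute("DELETE FROM personal_data WHERE created_at < ?", (cutoff.isoformat(),))
conn.commit()
conn.close()
```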
ℹ️ When the protection of personal data is a must, the organization should hide the data using techniques such as data masking, pseudonymization or anonymization.
Data masking includes: encryption, variation of some data (e.g., numbers and dates), replacing values with hashes, etc. Pseudonymization is the process of removing personal identifiers from data and replacing those identifiers with placeholder values. Where possible, all personal data should be completely de-identified (anonymized); this is a process of removing personal identifiers, both direct and indirect, that may lead to an individual being identified. Data that can no longer be associated with an individual is not covered by GDPR. Sometimes, companies create a mechanism for storing summary usage data by anonymising it.
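A minimal sketch of the masking techniques listed above (salted hashes, generalization of values, partial masking), assuming masked exports are needed for analytics or test environments; the field names are illustrative:

```python
# Mask a record: replace the identifier with a salted hash, generalize the age,
# and keep only the last four digits of the card number.
import hashlib

def mask_record(record: dict, salt: str) -> dict:
    return {
        "user": hashlib.sha256((salt + record["email"]).encode()).hexdigest(),
        "age_band": (record["age"] // 10) * 10,
        "card": "**** **** **** " + record["card"][-4:],
    }

print(mask_record({"email": "jane@example.com", "age": 37, "card": "4111111111111111"},
                  salt="s3cret"))
```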
ℹ️ Backups should be taken regularly and automatically on all computers and servers that process personal data. Backups should be stored in an encrypted, secure format. Ideally, backups should be stored in a different physical location. Backups of all authentication data and all encryption keys should be encrypted. Backups should be regularly tested through restore tests.
Backups should be encrypted before being transferred. Backup integrity should be checked. A backup management policy should be created. The maximum storage time for any data present in backups should be defined.
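A hypothetical sketch of the "encrypt before transfer" step, assuming the `cryptography` package, an existing database dump file and a long-lived backup key held in a secret manager (file names are illustrative):

```python
# Compress and encrypt a database dump before it leaves the host.
import gzip
from cryptography.fernet import Fernet

key = Fernet.generate_key()              # in practice, a long-lived key from a secret manager

with open("dump.sql", "rb") as f:
    compressed = gzip.compress(f.read())

encrypted = Fernet(key).encrypt(compressed)

with open("dump.sql.gz.enc", "wb") as f:  # only this encrypted file is transferred off-site
    f.write(encrypted)
```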
ℹ️ Logging should be enabled on all servers, backend systems and data transactions used by the project. Logs should be collected in a central system. Access to the logging system should be limited to authorized personnel only. Log integrity should be verified.
ℹ️ All systems managing personal and sensitive data should have their clocks synchronized via NTP or PTP with a national atomic clock or GPS, so that they share the same date and time and generate precise logs.
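Clock alignment can be spot-checked as part of monitoring. A minimal sketch assuming the third-party `ntplib` package; the pool address is illustrative:

```python
# Query an NTP server and report the local clock offset in seconds.
import ntplib

response = ntplib.NTPClient().request("pool.ntp.org", version=3)
print(f"Clock offset from NTP: {response.offset:.3f} seconds")
```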
ℹ️ Logs should be periodically reviewed to detect anomalous behavior. All privileged account activity should be logged. Logs should be protected against tampering or editing. No personal data should be stored in logs. Logs should track all accesses to the application's backend and all data operations. Logs should be retained for at least 6 months (ideally longer). A Logging and Monitoring policy should be created.
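A minimal sketch of application logging aligned with the points above: events are forwarded to a central syslog collector and carry only a pseudonymous internal ID, no personal data. The collector host and port are illustrative:

```python
# Forward audit events to a central syslog collector; log pseudonymous IDs only.
import logging
import logging.handlers

logger = logging.getLogger("app.audit")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.SysLogHandler(address=("logs.internal", 514)))

# No names or emails in logs, only an internal pseudonymous subject ID.
logger.info("data_export requested subject_id=%s records=%d", "a9f3c2d1", 42)
```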