I recently hosted an Ed TALK with Erika Fisher, Elena Elkina, and Larry Ponemon, and we discussed the value of Privacy Impact Assessments (PIAs). I wasn’t too familiar with the topic but found it interesting, given its similarities with security approaches.
PIAs are essentially an analysis of how personal information is collected, used, shared, and maintained. The concept is not new, but it is growing in popularity given the heightened attention to privacy issues and recent legislation. PIAs benefit both the organization itself and its customers: the organization reviews its data handling processes to determine how they might affect or compromise the privacy of the individuals whose data it holds, collects, or processes. PIAs have been conducted by various sub-agencies of the U.S. Department of Homeland Security (DHS), and methods for conducting them have been standardized. Here is a helpful template from DHS.
A PIA is typically designed to accomplish three main goals:
- Ensure conformance with applicable legal, regulatory, and policy requirements
- Identify and evaluate the risks of privacy breaches or other incidents
- Identify appropriate privacy controls to mitigate unacceptable risks
The purpose is to demonstrate that privacy protections have been considered and implemented throughout the development lifecycle of a system or product. PIAs are conducted when an organization wants to implement a new business process, is acquiring a company, or is launching a new product. One may also want to do a PIA when there are changes to existing processes, products, or systems. For example, when expanding into a new country, region, or state, the organization should ensure its products have appropriate privacy protections in place. The PIA is a living tool (similar to a threat model), not merely a one-off exercise.
Organizations are also leveraging PIAs to evaluate competitive advantages, feeding into calculations of product value and cost-effectiveness in design. Other benefits include risk-based decision-making, safe harbor protection, and building trust with clients.
While most PIAs are run by the privacy or legal team, many companies embed privacy experts or champions in different teams (similar to “security champions”). Depending on the product or process, that champion might lead a PIA in collaboration with other resources, e.g., the legal or InfoSec team, because it’s a collaborative effort. Most privacy regulations are intended to protect individuals’ rights; for businesses, however, the intent is to bring awareness and thoughtfulness to the people designing the products that control private data.
The Ed TALKS experts agreed that we, as an industry, need to push education amongst internal teams. It’s easy for an individual consumer to understand what personal information is and whether or not they want to share that data with others. Things get murkier when delivering products and services as a business, especially with the massive adoption of cloud infrastructure. Ideally, privacy and PIAs are built into an organization’s risk management process and standards. Most teams that build products and services must check boxes along the way before they’re allowed to put things into production: for functionality, performance, accessibility, security, and other quality issues. Including privacy as part of that should be a natural extension.
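To make that idea concrete, here’s a minimal Python sketch of what treating privacy as one more pre-production gate might look like. The names (`ReleaseCheck`, the specific checks) are hypothetical illustrations, not a real tool or the workflow any of the panelists described:

```python
# Hypothetical pre-production release gate: privacy sits alongside the
# other quality checks a team must clear before shipping.
from dataclasses import dataclass

@dataclass
class ReleaseCheck:
    name: str
    passed: bool
    notes: str = ""

def release_gate(checks: list[ReleaseCheck]) -> bool:
    """Return True only if every pre-production check has passed."""
    failures = [c for c in checks if not c.passed]
    for c in failures:
        print(f"BLOCKED by {c.name}: {c.notes or 'not yet signed off'}")
    return not failures

checks = [
    ReleaseCheck("functionality", True),
    ReleaseCheck("performance", True),
    ReleaseCheck("accessibility", True),
    ReleaseCheck("security review", True),
    # Privacy becomes one more box to check, not a separate process.
    ReleaseCheck("privacy (PIA complete)", False,
                 "preliminary PIA done; full PIA pending data-flow review"),
]

if release_gate(checks):
    print("Clear to deploy.")
```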
Privacy mandates like the new California Consumer Privacy Act (CCPA) are increasingly demanding PIAs, so it might be a good time to get started if you haven’t already. The General Data Protection Regulation (GDPR) has something similar called a DPIA, or data protection impact assessment. A DPIA is all about identifying and minimizing the risk associated with the processing of personal data, and European regulations specify when one must be conducted. Examples include profiling or automated decision-making and systematic monitoring or large-scale data processing.
From an operational perspective, the key is to make PIAs agile, executable, and as lightweight as possible (while still capturing the intent):
- Project Initiation: Decide whether to do a preliminary (light-touch) PIA first and complete a full PIA once the project is fully underway. An initial PIA may uncover significant issues that require a product or service to be re-architected, so there is no need to conduct a detailed PIA if you discover early on that a lot of rework will be required.
- Data Flow Analysis: Map out how your business handles personal information. Identify clusters of personal information and create a diagram of how that data flows through the organization. Mind maps and flow charts are helpful tools here (a minimal code sketch of this kind of inventory appears after this list).
- Privacy Analysis: You may wish to have personnel involved with the movement of personal information complete a privacy analysis questionnaire, followed by interviews and discussions of privacy issues and implications.
- Privacy Impact Assessment Report: Document the privacy risks and their potential implications, along with a discussion of possible efforts to mitigate or remedy those risks.
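To ground the data flow analysis step, here’s a minimal Python sketch of the kind of inventory a flow chart captures. The systems, data categories, and the `needs_privacy_review` rule are all hypothetical; in practice this mapping usually lives in diagramming or data governance tooling rather than code:

```python
# A toy data-flow inventory: each flow records where personal data moves,
# so flows that warrant privacy analysis can be flagged for the PIA.
from dataclasses import dataclass, field

PII_CATEGORIES = {"email", "name", "ip_address", "payment_info"}

@dataclass
class DataFlow:
    source: str
    destination: str
    data_elements: set = field(default_factory=set)
    crosses_border: bool = False

    def needs_privacy_review(self) -> bool:
        # Flag any flow carrying personal information; cross-border
        # transfers are flagged regardless (e.g., GDPR transfer rules).
        return bool(self.data_elements & PII_CATEGORIES) or self.crosses_border

flows = [
    DataFlow("signup-form", "crm", {"email", "name"}),
    DataFlow("crm", "analytics", {"page_views"}),
    DataFlow("crm", "eu-datacenter", {"email"}, crosses_border=True),
]

for f in flows:
    flag = "REVIEW" if f.needs_privacy_review() else "ok"
    print(f"[{flag}] {f.source} -> {f.destination}: {sorted(f.data_elements)}")
```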
When talking about privacy, Erika warned that if you don’t put privacy into context for different teams and build champions in other groups, “you will never survive through the scale of what you’re trying to build nor the pace of it.” She offered strategic advice coupled with tactical examples of how she got engagement from her teams at Atlassian. She worked with various functional teams (accounting, engineering, IT operations, customer support, etc.) to develop privacy champions on those teams. These folks helped distribute the load of PIAs, so the burden wasn’t entirely on the privacy and legal teams. The champions also helped frame privacy concerns in the context and language of their specific functional group: accountants talking with accountants, engineers talking with other engineers, and so on.

She then leaned on those champions to develop ways to keep privacy on the minds of those teams, tapping into Atlassian’s communication platforms. For example, in the early stages of a PIA, she wanted to capture the dialogue relevant to the assessment as it took place, so they created a hashtag within their intranet. People would tag things as #PIA so they could be found quickly and pulled into the format needed for the PIA. Their engineers loved it because it was relatable to them, e.g., “I’m data-tagging, and then I’m going to use this to inform the data.” This creativity helped operationalize privacy awareness across different teams.
Being a security guy, I see a lot of parallels between PIAs and threat modeling. Both start by identifying the assets you’re looking to protect, then chart out paths to potential failure and devise processes and defenses to avoid those failures. The drivers and purpose of each may differ; however, the approach is very similar.