For this question, refer to the Helicopter Racing League (HRL) case study. Your team is in charge of creating a
payment card data vault for card numbers used to bill tens of thousands of viewers, merchandise consumers,
and season ticket holders. You need to implement a custom card tokenization service that meets the following
requirements:
• It must provide low latency at minimal cost.
• It must be able to identify duplicate credit cards and must not store plaintext card numbers.
• It should support annual key rotation.
Which storage approach should you adopt for your tokenization service?
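The crux of this question is that the tokens must be deterministic (so duplicate cards can be detected) while plaintext numbers are never stored, and the keys must be rotatable. One common way to satisfy all three properties is a keyed hash (HMAC) with a key-version prefix; the sketch below is illustrative only and is not tied to any particular answer choice — the key registry, version names, and sample card numbers are all hypothetical, and real keys would live in a key management service.

```python
import hmac
import hashlib

# Hypothetical key registry: version -> secret key. In production these
# would be held in a KMS/secret manager and rotated annually.
KEYS = {"v1": b"2021-secret", "v2": b"2022-secret"}
CURRENT_VERSION = "v2"

def tokenize(card_number: str) -> str:
    """Deterministically tokenize a card number.

    The same card always yields the same token under a given key
    version, so duplicates are detectable, yet the plaintext number
    is never stored and cannot be recovered from the token.
    """
    digest = hmac.new(KEYS[CURRENT_VERSION],
                      card_number.encode(), hashlib.sha256).hexdigest()
    # The version prefix records which key produced the token, which is
    # what makes annual key rotation workable.
    return f"{CURRENT_VERSION}:{digest}"

# Duplicate cards map to the same token:
assert tokenize("4111111111111111") == tokenize("4111111111111111")
```

Note the trade-off this sketch encodes: determinism is exactly what enables duplicate detection, which is why a plain random token table would not meet the requirements by itself.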
For this question, refer to the Helicopter Racing League (HRL) case study. The HRL development team
releases a new version of their predictive capability application every Tuesday evening at 3 a.m. UTC to a
repository. The security team at HRL has developed an in-house penetration test Cloud Function called Airwolf.
The security team wants to run Airwolf against the predictive capability application as soon as it is released
every Tuesday. You need to set up Airwolf to run at the recurring weekly cadence. What should you do?
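The question hinges on the recurring weekly cadence: Tuesday at 03:00 UTC, which in cron terms is `0 3 * * 2`. As a sanity check on that schedule (not as any particular answer), here is a small Python sketch that computes the next Tuesday-03:00-UTC run time from an arbitrary moment; the dates used are illustrative.

```python
from datetime import datetime, timedelta, timezone

# Cron equivalent of the required cadence: "0 3 * * 2"
# (minute 0, hour 3, every Tuesday, UTC).
def next_tuesday_3am_utc(now: datetime) -> datetime:
    """Return the next Tuesday 03:00 UTC strictly after `now`."""
    candidate = now.replace(hour=3, minute=0, second=0, microsecond=0)
    # datetime.weekday(): Monday=0, Tuesday=1, ...
    days_ahead = (1 - candidate.weekday()) % 7
    candidate += timedelta(days=days_ahead)
    if candidate <= now:
        candidate += timedelta(days=7)
    return candidate

# From a Monday afternoon, the next run is the following morning:
now = datetime(2023, 5, 1, 12, 0, tzinfo=timezone.utc)  # a Monday
print(next_tuesday_3am_utc(now))  # 2023-05-02 03:00:00+00:00
```

Whichever triggering mechanism is chosen, it must fire on exactly this cadence and in UTC, not a local time zone.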
For this question, refer to the Helicopter Racing League (HRL) case study. Recently HRL started a new regional
racing league in Cape Town, South Africa. In an effort to give customers in Cape Town a better user
experience, HRL has partnered with the Content Delivery Network provider, Fastly. HRL needs to allow traffic
coming from all of the Fastly IP address ranges into their Virtual Private Cloud network (VPC network). You are
a member of the HRL security team and you need to configure the update that will allow only the Fastly IP
address ranges through the External HTTP(S) load balancer. Which command should you use?
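Whatever command is used, its effect is a source-IP allowlist in front of the load balancer: traffic is admitted only when its source address falls inside one of the published CIDR ranges. The sketch below illustrates just that matching logic with Python's standard `ipaddress` module; the CIDR blocks and test addresses are stand-ins, not an authoritative copy of Fastly's published range list.

```python
import ipaddress

# Stand-in CIDR ranges representing a CDN provider's published list.
ALLOWED_RANGES = [ipaddress.ip_network(cidr) for cidr in
                  ("151.101.0.0/16", "199.232.0.0/16")]

def is_allowed(source_ip: str) -> bool:
    """True if the source IP falls inside one of the allowed ranges."""
    ip = ipaddress.ip_address(source_ip)
    return any(ip in net for net in ALLOWED_RANGES)

print(is_allowed("151.101.1.57"))   # True: inside 151.101.0.0/16
print(is_allowed("203.0.113.10"))   # False: outside every range
```

The operational point the question is probing: the filter must sit at the edge (on the load balancer path), admit only the listed ranges, and deny everything else by default.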
For this question, refer to the TerramEarth case study.
TerramEarth's 20 million vehicles are scattered around the world. Based on each vehicle's location, its
telemetry data is stored in a Google Cloud Storage (GCS) regional bucket (US, Europe, or Asia). The CTO
has asked you to run a report on the raw telemetry data to determine why vehicles are breaking down after
100K miles. You want to run this job on all of the data. What is the most cost-effective way to run this job?
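The cost lever in this scenario is data movement: raw telemetry copied across continents incurs network egress charges, while a job launched in the same region as each bucket moves nothing. The toy calculation below makes that concrete; the data volumes and the per-GB rate are made up for illustration and are not real GCP pricing.

```python
# Illustrative only: hypothetical data volumes and a hypothetical
# cross-continent egress rate, not actual GCP pricing.
EGRESS_PER_GB = 0.08
DATA_GB = {"us": 400_000, "europe": 350_000, "asia": 250_000}

def transfer_cost(home_region: str) -> float:
    """Cost of copying all remote buckets into one region first."""
    return sum(gb * EGRESS_PER_GB
               for region, gb in DATA_GB.items() if region != home_region)

# Processing each bucket in its own region moves no data at all,
# so it beats even the cheapest centralization choice:
cheapest_centralized = min(transfer_cost(r) for r in DATA_GB)
print(0.0 < cheapest_centralized)  # True
```

The arithmetic generalizes regardless of the exact rates: whenever egress is nonzero, bringing the job to the data is cheaper than bringing the data to the job.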