2019 AWS SAA-C01 Practice Questions 251-300

AWS Certified Solutions Architect – Associate (SAA-C01) Practice Test 2019

Q251. A company is moving to AWS. Management has identified a set of approved AWS services that meet all deployment requirements. The company would like to restrict employee access to all other, unapproved services. Which solution meets these requirements with the LEAST amount of operational overhead?




 

  1. Configure the AWS Trusted Advisor service utilization compliance report. Subscribe to Amazon SNS notifications from Trusted Advisor. Create a custom AWS Lambda function that can automatically remediate the use of unauthorized services.
  2. Use AWS Config to evaluate the configuration settings of AWS resources. Subscribe to Amazon SNS notifications from AWS Config. Create a custom AWS Lambda function that can automatically remediate the use of unauthorized services.
  3. Configure AWS Organizations. Create an organizational unit (OU) and place all AWS accounts into the OU. Apply a service control policy (SCP) to the OU that denies the use of certain services.
  4. Create a custom AWS IAM policy. Deploy the policy to each account using AWS CloudFormation StackSets. Include deny statements in the policy to restrict the use of certain services. Attach the policies to all IAM users in each account.

Answer: A
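
For reference, the service control policy mentioned in option 3 is a JSON document attached to the OU. A minimal boto3 sketch of creating and attaching such an SCP; the policy name, approved-service list, and OU ID are hypothetical placeholders, not taken from the question:

```python
import json
import boto3

org = boto3.client("organizations")

# Hypothetical SCP: deny everything except an approved set of services.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "NotAction": ["ec2:*", "s3:*", "rds:*", "cloudwatch:*"],  # example approved services
        "Resource": "*",
    }],
}

policy = org.create_policy(
    Name="deny-unapproved-services",            # hypothetical policy name
    Description="Allow only the approved AWS services",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

# Attach the SCP to the organizational unit that holds all member accounts.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-ab12-34567890",                # hypothetical OU ID
)
```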

 

Q252. A customer is running a critical payroll system in a production environment in one data center and a disaster recovery (DR) environment in another. The application includes load-balanced web servers and failover for the MySQL database. The customer’s DR process is manual and error-prone. For this reason, management has asked IT to migrate the application to AWS and make it highly available so that IT no longer has to manually fail over the environment.

 

How should a Solutions Architect migrate the system to AWS?

 

  1. Migrate the production and DR environments to different Availability Zones within the same region. Let AWS manage failover between the environments.
  2. Migrate the production and DR environments to different regions. Let AWS manage failover between the environments.
  3. Migrate the production environment to a single Availability Zone, and set up instance recovery for Amazon EC2. Decommission the DR environment because it is no longer needed.
  4. Migrate the production environment to span multiple Availability Zones, using Elastic Load Balancing and Multi-AZ Amazon RDS. Decommission the DR environment because it is no longer needed.

 

Answer: B

 

Q253. A company is creating a web application that will run on an Amazon EC2 instance. The application on the instance needs access to an Amazon DynamoDB table for storage.

 

What should be done to meet these requirements?

 

  1. Create another AWS account root user with permissions to the DynamoDB table.
  2. Create an IAM role and assign the role to the EC2 instance with permissions to the DynamoDB table.
  3. Create an identity provider and assign the identity provider to the EC2 instance with permissions to the DynamoDB table.
  4. Create identity federation with permissions to the DynamoDB table.

 

Answer: B
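
Because the keyed answer is an instance role, a rough boto3 sketch of creating the role, scoping it to a single table, and wrapping it in an instance profile may help. The role name, profile name, table ARN, and account ID are hypothetical:

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy that lets EC2 assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(
    RoleName="webapp-dynamodb-role",                     # hypothetical
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Inline permissions limited to the application's table.
table_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/app-table",  # hypothetical ARN
    }],
}
iam.put_role_policy(
    RoleName="webapp-dynamodb-role",
    PolicyName="app-table-access",
    PolicyDocument=json.dumps(table_policy),
)

# An instance profile wraps the role so it can be attached to the EC2 instance.
iam.create_instance_profile(InstanceProfileName="webapp-dynamodb-profile")
iam.add_role_to_instance_profile(
    InstanceProfileName="webapp-dynamodb-profile",
    RoleName="webapp-dynamodb-role",
)
```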

 



Q254. A company is creating a web application that allows customers to view photos in their web browsers. The website is hosted in us-east-1 on Amazon EC2 instances behind an Application Load Balancer. Users will be located in many places around the world.

 

Which solution should provide all users with the fastest photo viewing experience?

 

  1. Implement an AWS Auto Scaling group for the web server instances behind the Application Load Balancer.
  2. Enable Amazon CloudFront for the website and specify the Application Load Balancer as the origin.
  3. Move the photos into an Amazon S3 bucket and enable static website hosting.
  4. Enable Amazon ElastiCache in the web server subnet.

 

Answer: A

 

Explanation:

http://jayendrapatil.com/tag/elb/

Q255. A Solutions Architect is designing a highly available web application on AWS. The data served on the website is dynamic and is pulled from Amazon DynamoDB. All users are geographically close to one another.

 

How can the Solutions Architect make the application highly available?

 

  1. Host the website data on Amazon S3 and set permissions to enable public read-only access for users.
  2. Host the web server data on Amazon CloudFront and update the objects in the CloudFront distribution when they change.
  3. Host the application on EC2 instances across multiple Availability Zones. Use an Auto Scaling group coupled with an Application Load Balancer.
  4. Host the application on EC2 instances in a single Availability Zone. Replicate the EC2 instances to a separate region, and use an Application Load Balancer for high availability.

 

Answer: C

 

Explanation:

https://aws.amazon.com/cn/blogs/aws/amazon-rds-multi-az-deployment/

Q256. A company is migrating on-premises databases to AWS. The company’s backend application produces a large number of database queries for reporting purposes, and the company wants to offload some of those reads to a read replica, allowing the primary database to continue performing efficiently.

 

Which AWS database platforms will accomplish this? (Select TWO.)

 

  1. Amazon RDS for Oracle
  2. Amazon RDS for PostgreSQL
  3. Amazon RDS for MariaDB
  4. Amazon DynamoDB
  5. Amazon RDS for Microsoft SQL Server

 

Answer: BC

 




Explanation:

AWS RDS Replication – Multi-AZ & Read Replica – Certification

Q257. An application launched on Amazon EC2 instances needs to publish personally identifiable information (PII) about customers using Amazon SNS. The application is launched in private subnets within an Amazon VPC.

 

Which is the MOST secure way to allow the application to access service endpoints in the same region?

 

  1. Use an internet gateway.
  2. Use AWS PrivateLink.
  3. Use a NAT gateway.
  4. Use a proxy instance.

 

Answer: B
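
AWS PrivateLink in this context means an interface VPC endpoint for the SNS service endpoint, so the PII traffic never leaves the AWS network. A minimal sketch, assuming placeholder VPC, subnet, and security group IDs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Interface endpoint (PrivateLink) for Amazon SNS in the application's VPC.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",                         # hypothetical VPC
    ServiceName="com.amazonaws.us-east-1.sns",
    SubnetIds=["subnet-0aaa1111bbbb2222c", "subnet-0ddd3333eeee4444f"],  # hypothetical private subnets
    SecurityGroupIds=["sg-0123456789abcdef0"],             # hypothetical security group
    PrivateDnsEnabled=True,   # lets the SDK resolve the regional SNS endpoint to private IPs
)
```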

 

Q258. A data-processing application runs on an i3.large EC2 instance with a single 100 GB EBS gp2 volume. The application stores temporary data in a small database (less than 30 GB) located on the EBS root volume. The application is struggling to process the data fast enough, and a Solutions Architect has determined that the I/O speed of the temporary database is the bottleneck.

 

What is the MOST cost-efficient way to improve the database response times?

 

  1. Enable EBS optimization on the instance and keep the temporary files on the existing volume.
  2. Put the temporary database on a new 50-GB EBS gp2 volume.
  3. Move the temporary database onto instance storage.
  4. Put the temporary database on a new 50-GB EBS io1 volume with a 3-K IOPS provision.

 

Answer: D

 

Explanation:

io1 volumes, or Provisioned IOPS (PIOPS) SSDs, are best for: Critical business applications that require sustained IOPS performance, or more than 10,000 IOPS or 160 MiB/s of throughput per volume, like large database workloads



Q259. An application stores data in an Amazon RDS PostgreSQL Multi-AZ database instance. The ratio of read requests to write requests is about 2 to 1. Recent increases in traffic are causing very high latency.

 

How can this problem be corrected?

 

  1. Create a similar RDS PostgreSQL instance and direct all traffic to it.
  2. Use the secondary instance of the Multiple Availability Zone for read traffic only.
  3. Create a read replica and send half of all traffic to it.
  4. Create a read replica and send all read traffic to it.

 

Answer: C

 

Explanation:

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
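
Both read-replica options begin with the same step. A minimal boto3 sketch of creating the replica, with hypothetical instance identifiers:

```python
import boto3

rds = boto3.client("rds")

# Create a read replica of the existing PostgreSQL Multi-AZ instance.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-postgres-replica",   # hypothetical replica name
    SourceDBInstanceIdentifier="app-postgres",     # hypothetical source instance
)
# The application then points its read-only connection string at the replica's
# endpoint, while writes continue to go to the primary instance.
```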

Q260. A Solutions Architect is designing a system that will store Personally Identifiable Information (PII) in an Amazon S3 bucket. Due to compliance and regulatory requirements, both the master keys and unencrypted data should never be sent to AWS.

 

What Amazon S3 encryption technique should the Architect choose?

 

  1. Amazon S3 client-side encryption with an AWS KMS-managed customer master key (CMK)
  2. Amazon S3 server-side encryption with an AWS KMS-managed key
  3. Amazon S3 client-side encryption with a client-side master key
  4. Amazon S3 server-side encryption with a customer-provided key

 

Answer: A

 

Explanation:

https://aws.amazon.com/cn/blogs/china/new-amazon-s3-encryption-security-features/

Q261. A Security team reviewed their company’s VPC Flow Logs and found that traffic is being directed to the internet. The application in the VPC uses Amazon EC2 instances for compute and Amazon S3 for storage. The company’s goal is to eliminate internet access and allow the application to continue to function.

 

What change should be made in the VPC before updating the route table?

 

  1. Create a NAT gateway for Amazon S3 access
  2. Create a VPC endpoint for Amazon S3 access
  3. Create a VPC endpoint for Amazon EC2 access
  4. Create a NAT gateway for Amazon EC2 access

 

Answer: D

 

Q262. A company is deploying a reporting application on Amazon EC2. The application is expected to generate 1,000 documents every hour and each document will be 800 MB. The company is concerned about strong data consistency and file locking, as various applications hosted on other EC2 instances will process the report documents in parallel when they become available. What storage solution will meet these requirements with the LEAST amount of administrative overhead?

 

  1. Amazon EFS
  2. Amazon S3
  3. Amazon ElastiCache
  4. Amazon EBS

 

Answer: A

 

Explanation:

https://aws.amazon.com/efs/

Q263. A Solutions Architect is building a WordPress-based web application hosted on AWS using Amazon EC2. This application serves as a blog for an international internet security company. The application must be geographically redundant and scalable. It must separate the public Amazon EC2 web servers from the private Amazon RDS database, be highly available, and support dynamic port routing. Which combination of AWS services or capabilities will meet these requirements?

 

  1. AWS Auto Scaling with a Classic Load Balancer, and AWS CloudTrail
  2. Amazon Route 53, Auto Scaling with an Application Load Balancer, and Amazon CloudFront
  3. A VPC, a NAT gateway and Auto Scaling with a Network Load Balancer
  4. CloudFront, Route 53, and Auto Scaling with a Classic Load Balancer

 

Answer: A

 

Q264. An e-commerce application places orders in an Amazon SQS queue. When a message is received, Amazon EC2 worker instances process the request. The EC2 instances are in an Auto Scaling group.

 

How should the architecture be designed to scale up and down with the LEAST amount of operational overhead?

 

  1. Use an Amazon CloudWatch alarm on the EC2 CPU to scale the Auto Scaling group up and down.
  2. Use an EC2 Auto Scaling health check for messages processed on the EC2 instances to scale up and down.
  3. Use an Amazon CloudWatch alarm based on the number of visible messages to scale the Auto Scaling group up or down.
  4. Use an Amazon CloudWatch alarm based on the CPU to scale the Auto Scaling group up or down.

 

Answer: B
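
The CloudWatch-alarm options work by alarming on the queue depth and invoking an Auto Scaling policy. A hedged sketch of such an alarm on the visible-message count; the queue name, threshold, and scaling-policy ARN are placeholders:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Hypothetical ARN of a scale-out policy on the worker Auto Scaling group.
scale_out_policy_arn = (
    "arn:aws:autoscaling:us-east-1:111122223333:scalingPolicy:EXAMPLE-ID:"
    "autoScalingGroupName/order-workers:policyName/scale-out"
)

# Scale out when the orders queue backlog grows.
cloudwatch.put_metric_alarm(
    AlarmName="orders-queue-backlog-high",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "orders-queue"}],  # hypothetical queue
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=100,                         # illustrative backlog threshold
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[scale_out_policy_arn],
)
```

A mirror-image alarm with a low threshold would trigger the scale-in policy.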

 

Q265. A customer is migrating to AWS and requires applications to access Network File System shares without code changes. Data is critical and accessed frequently.

 

Which storage solution should a Solutions Architect recommend to maximize availability and durability?

 

  1. Amazon EBS
  2. Amazon S3
  3. AWS Storage Gateway for files
  4. Amazon EFS

 

Answer: B

 

Explanation:

https://aws.amazon.com/storagegateway/faqs/

Q266. A company has many applications on Amazon EC2 instances running in Auto Scaling groups. Company policies require that data on the attached Amazon EBS volume must be retained.

 

Which actions will meet this requirement without impacting performance?

 

  1. Enable Termination Protection on the Amazon EC2 instances.
  2. Disable DeleteOnTermination for the Amazon EBS volumes.
  3. Use Amazon EC2 user data to set up a synchronization job for root volume data.
  4. Change the auto scaling Health Check to point to a source on the root volume.

 

Answer: B

 

Explanation:

https://aws.amazon.com/ec2/faqs/#Spot_instances
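
Disabling DeleteOnTermination can be done per running instance as sketched below; for instances launched by an Auto Scaling group the same flag would normally be set in the launch template's or launch configuration's block device mapping. The instance ID and device name are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Keep the data volume after the instance terminates.
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",            # hypothetical instance
    BlockDeviceMappings=[{
        "DeviceName": "/dev/sdf",                # hypothetical data volume device
        "Ebs": {"DeleteOnTermination": False},
    }],
)
```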

Q267. A company wants to expand its web services from us-east-1 into ap-southeast-1. The company stores a large amount of static content on its website, and recently received complaints about slow loading speeds and the website timing out.

 

What should be done to meet the expansion goal while also addressing the latency and timeout issues?

 

  1. Store the static content in Amazon S3 and enable S3 Transfer Acceleration.
  2. Store the static content in an Amazon EBS volume in the ap-southeast-1 region and provision larger Amazon EC2 instances for the website.
  3. Use an Amazon Route 53 simple routing policy to distribute cached content across three regions.
  4. Use Amazon S3 to store the static content and configure an Amazon CloudFront distribution.

 

Answer: D

 

Q268. An application is scanning an Amazon DynamoDB table that was created with default settings. The application occasionally reads stale data when it queries the table.

 

How can this issue be corrected?

 

  1. Increase the provisioned read capacity of the table.
  2. Enable Auto Scaling on the DynamoDB table.
  3. Update the application to use strongly consistent reads.
  4. Re-create the DynamoDB table with eventual consistency disabled.

 

Answer: C

 

Explanation:

https://www.javacodegeeks.com/2017/10/amazon-dynamodb-tutorial.html
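
With the keyed answer, the application simply asks DynamoDB for strongly consistent reads. A small boto3 sketch, assuming a hypothetical table and key schema:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("app-table")              # hypothetical table name

# A strongly consistent read returns the latest committed value instead of a
# possibly stale replica copy, at the cost of additional read capacity.
response = table.get_item(
    Key={"order_id": "12345"},                   # hypothetical key schema
    ConsistentRead=True,
)
item = response.get("Item")
```

Query and Scan accept the same ConsistentRead flag.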

Q269. A company is setting up a new website for online sales. The company will have a web tier and a database tier. The web tier consists of load-balanced, auto-scaled Amazon EC2 instances in multiple Availability Zones (AZs). The database tier is an Amazon RDS Multi-AZ deployment. The EC2 instances must connect securely to the database.

 

How should the resources be launched?

 

  1. EC2 instances: public subnet

RDS database instances: public subnet

Load balancer: public subnet

  1. EC2 instances: public subnet

RDS database instances: private subnet

Load balancer: private subnet

  1. EC2 instances: private subnet

RDS database instances: public subnet

Load balancer: public subnet

  1. EC2 instances: private subnet

RDS database instances: private subnet

Load balancer: public subnet

Answer: B

 

Q270. A customer set up an Amazon VPC with one private subnet and one public subnet with a NAT gateway. The VPC will contain a group of Amazon EC2 instances. All instances will configure themselves at startup by downloading a bootstrap script from an Amazon S3 bucket with a policy that only allows access from the customer’s Amazon EC2 instances, and will then deploy an application through Git. A Solutions Architect has been asked to design a solution that provides the highest level of security regarding network connectivity to the Amazon EC2 instances.

 

How should the Architect design the infrastructure?

 

  1. Place the Amazon EC2 instances in the public subnet, with no EIPs; route outgoing traffic through the internet gateway.
  2. Place the Amazon EC2 instances in a public subnet, and assign EIPs; route outgoing traffic through the NAT gateway.
  3. Place the Amazon EC2 instances in a private subnet, and assign EIPs; route outgoing traffic through the internet gateway.
  4. Place the Amazon EC2 instances in a private subnet, with no EIPs; route outgoing traffic through the NAT gateway.

 

Answer: B

 

Q271. A company processed 10 TB of raw data to generate quarterly reports. Although it is unlikely to be used again, the raw data needs to be preserved for compliance and auditing purposes.

 

What is the MOST cost-effective way to store the data in AWS?

 

  1. Amazon EBS Cold HDD (sc1)
  2. Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)
  3. Amazon S3 Standard-Infrequent Access (S3 Standard-IA)
  4. Amazon Glacier

 

Answer: C

 

Q272. A Solutions Architect needs to design a solution that will allow Website Developers to deploy static web content without managing server infrastructure. All web content must be accessed over HTTPS with a custom domain name. The solution should be scalable as the company continues to grow.

 

Which of the following will provide the MOST cost-effective solution?

 

  1. Amazon EC2 instance with Amazon EBS
  2. AWS Lambda function with Amazon API Gateway
  3. Amazon CloudFront with an Amazon S3 bucket origin
  4. Amazon S3 with a static website

 




Answer: C

 

Q273. A company is running a series of national TV campaigns. These 30-second advertisements will introduce sudden traffic peaks targeted at a Node.js application. The company expects traffic to increase from five requests each minute to more than 5,000 requests each minute.

 

Which AWS service should a Solutions Architect use to ensure traffic surges can be handled?

 

  1. AWS Lambda
  2. Amazon ElastiCache
  3. Size EC2 instances to handle peak load
  4. An Auto Scaling group for EC2 instances

 

Answer: D

 

Explanation:

https://aws.amazon.com/blogs/aws/how-aws-powered-amazons-biggest-day-ever/

Q274. An insurance company stores all documents related to annual policies for the duration of the policies. The documents are created once and then stored until they are required, typically at the end of the policy. A document must be capable of being retrieved immediately. The company is now moving their document management to the AWS Cloud.

 

Which service should a Solutions Architect recommend as a cost-effective solution that meets the company’s requirements?

 

  1. Amazon RDS MySQL
  2. Amazon S3 Standard-Infrequent Access
  3. Amazon Glacier
  4. Amazon S3 Standard

 

Answer: B

 

Q275. How can a user track memory usage in an EC2 instance?

 

  1. Call Amazon CloudWatch to retrieve the memory usage metric data that exists for the EC2 instance.
  2. Assign an IAM role to the EC2 instance with an IAM policy granting access to the desired metric.
  3. Use an instance type that supports memory usage reporting to a metric by default.
  4. Place an agent on the EC2 instance to push memory usage to an Amazon CloudWatch custom metric.

 

Answer: D

 

Explanation:

https://www.quora.com/How-can-I-monitor-memory-usage-on-Amazon-EC2
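
In practice the CloudWatch agent handles this, but the underlying idea is an agent publishing a custom metric from inside the guest OS, since the hypervisor cannot see guest memory. A simplified sketch, assuming the psutil library is available for reading local memory usage; the namespace and instance ID are placeholders:

```python
import boto3
import psutil  # assumed helper library for reading local memory usage

cloudwatch = boto3.client("cloudwatch")

# Read current memory utilisation inside the instance.
memory_percent = psutil.virtual_memory().percent

# Push it to CloudWatch as a custom metric.
cloudwatch.put_metric_data(
    Namespace="Custom/EC2",                      # hypothetical namespace
    MetricData=[{
        "MetricName": "MemoryUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # hypothetical
        "Value": memory_percent,
        "Unit": "Percent",
    }],
)
```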

Q276. A Solutions Architect must design a storage solution for incoming billing reports in CSV format. The data does not need to be scanned frequently and is discarded after 30 days.

 

Which service will be MOST cost-effective in meeting these requirements?

 

  1. Import the logs into an RDS MySQL instance.
  2. Use AWS Data Pipeline to import the logs into a DynamoDB table.
  3. Write the files to an S3 bucket and use Amazon Athena to query the data.
  4. Import the logs to an Amazon Redshift cluster

 

Answer: C

 

Explanation:

https://aws.amazon.com/cn/athena/
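
With the keyed answer, the CSV files stay in S3 and Athena queries them in place. A hedged sketch of submitting such a query; the bucket, database, table, and column names are hypothetical:

```python
import boto3

athena = boto3.client("athena")

# Query the CSV billing reports directly in S3. The "billing_reports" table
# would be defined (e.g., in the Glue Data Catalog) over the CSV prefix.
athena.start_query_execution(
    QueryString=(
        "SELECT account_id, SUM(amount) AS total "
        "FROM billing_reports GROUP BY account_id"
    ),
    QueryExecutionContext={"Database": "billing"},               # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},  # hypothetical bucket
)
```

A 30-day S3 lifecycle expiration rule on the bucket would handle the discard requirement.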

Q277. A Solutions Architect needs to deploy an HTTP/HTTPS service on Amazon EC2 instances with support for WebSockets using load balancers.

 

How can the Architect meet these requirements?

 

  1. Configure a Network Load Balancer.
  2. Configure an Application Load Balancer.
  3. Configure a Classic Load Balancer.
  4. Configure a Layer-4 Load Balancer.

 

Answer: B

 

Explanation:

The Application Load Balancer is designed to handle streaming, real-time, and WebSocket workloads in an optimized fashion. Instead of buffering requests and responses, it handles them in streaming fashion.

This reduces latency and increases the perceived performance of your application. Reference: https://aws.amazon.com/blogs/aws/new-aws-application-load-balancer/

Q278. A Solutions Architect is designing a web application that runs on Amazon EC2 instances behind a load balancer. All data in transit must be encrypted.

 

Which solutions will meet the encryption requirement? (Select TWO.)

 

  1. Use an Application Load Balancer (ALB) in passthrough mode, then terminate SSL on EC2 instances.
  2. Use an Application Load Balancer (ALB) with a TCP listener, then terminate SSL on EC2 instances.
  3. Use a Network Load Balancer (NLB) with a TCP listener, then terminate SSL on EC2 instances.
  4. Use an Application Load Balancer (ALB) with an HTTPS listener, then install SSL certificates on the ALB and EC2 instances.
  5. Use a Network Load Balancer (NLB) with an HTTPS listener, then install SSL certificates on the NLB and EC2 instances.

 

Answer: C

 

Q279. A user is designing a new service that receives location updates from 3,600 rental cars every hour. The cars upload their location to an Amazon S3 bucket. Each location must be checked for distance from the original rental location.

 

Which services will process the updates and automatically scale?

 

  1. Amazon EC2 and Amazon EBS
  2. Amazon Kinesis Firehose and Amazon S3
  3. Amazon ECS and Amazon RDS
  4. Amazon S3 events and AWS Lambda

 

Answer: A

 

Q280. A company is writing a new service running on Amazon EC2 that must create thumbnail images of thousands of images in a large archive. The system will write scratch data to storage during the process.

 

Which storage service is best suited for this scenario?

 

  1. EC2 instance store
  2. Amazon EFS
  3. Amazon CloudSearch
  4. Amazon EBS Throughput Optimized HDD (st1)

 

Answer: D

 

Q281. A company’s Amazon RDS MySQL DB instance may be rebooted for maintenance and to apply patches. This database is critical and potential user disruption must be minimized.

 

What should the Solution Architect do in this scenario?

 

  1. Set up an RDS MySQL cluster
  2. Create an RDS MySQL Read Replica.
  3. Set RDS MySQL to Multi-AZ.
  4. Create an Amazon EC2 instance MySQL cluster.

 

Answer: D

 

Explanation:

https://docs.aws.amazon.com/zh_cn/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html

Q282. A retail company operates an e-commerce environment that runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group. Images are hosted in an Amazon S3 bucket using a custom domain name. During a flash sale with 10,000 simultaneous users, some images on the website are not loading. What should be done to resolve the performance issue?

 

  1. Move the images to the EC2 instances in the Auto Scaling group.
  2. Enable Transfer Acceleration for the S3 bucket.
  3. Configure an Amazon CloudFront distribution with the S3 bucket as the origin.
  4. Increase the number of minimum, desired, and maximum EC2 instances in the Auto Scaling group.

 

Answer: D

 

Q283. A Solutions Architect is designing a new workload where an AWS Lambda function will access an Amazon DynamoDB table.

 

What is the MOST secure means of granting the Lambda function access to the DynamoDB table?

 

  1. Create an identity and access management (IAM) role with the necessary permissions to access the DynamoDB table, and assign the role to the Lambda function.
  2. Create a DynamoDB user name and password and give them to the Developer to use in the Lambda function.
  3. Create an identity and access management (IAM) user, and create access and secret keys for the user. Give the user the necessary permissions to access the DynamoDB table. Have the Developer use these keys to access the resources.
  4. Create an identity and access management (IAM) role allowing access from AWS Lambda and assign the role to the DynamoDB table.

 

Answer: A

 

Explanation:

https://aws.amazon.com/blogs/security/how-to-create-an-aws-iam-policy-to-grant-aws-lambda-access-to-an-amazon-dynamodb-table/



Q284. A web application runs on Amazon EC2 instances behind an ELB Application Load Balancer. The instances run in an EC2 Auto Scaling group across multiple Availability Zones. Every night, the Auto Scaling group doubles in size. Traffic analysis shows that users in a particular region are requesting the same static content stored locally on the EC2 instances.

 

How can a Solutions Architect reduce the need to scale and improve application performance for the users?

 

  1. Re-deploy the application in a new VPC that is closer to the users making the requests.
  2. Create an Amazon CloudFront distribution for the site and redirect user traffic to the distribution.
  3. Store the contents on Amazon EFS instead of the EC2 root volume.
  4. Implement Amazon Redshift to create a repository of the content closer to the users.

 

Answer: B

 

Q285. A Solutions Architect is designing an application that will run on Amazon ECS behind an Application Load Balancer (ALB). For security reasons, the Amazon EC2 host instances for the ECS cluster are in a private subnet.

What should be done to ensure that the incoming traffic to the host instances is from the ALB only?

 

  1. Create network ACL rules for the private subnet to allow incoming traffic on ports 32768 through 61000 from the IP address of the ALB only.
  2. Update the EC2 cluster security group to allow incoming access from the IP address of the ALB only.
  3. Modify the security group used by the EC2 cluster to allow incoming traffic from the security group used by the ALB only.
  4. Enable AWS WAF on the ALB and enable the ECS rule.

 

Answer: B

 

Q286. A company wants to improve latency by hosting images within a public Amazon S3 bucket fronted by an Amazon CloudFront distribution. The company wants to restrict access to the S3 bucket to include the CloudFront distribution only, while also allowing CloudFront to continue proper functionality.

 

What should be done after making the bucket private to restrict access with the LEAST operational overhead?

 

  1. Create a CloudFront origin access identity and create a security group that allows access from CloudFront.
  2. Create a CloudFront origin access identity and update the bucket policy to grant access to it.
  3. Create a bucket policy restricting all access to the bucket to include CloudFront IPs only.
  4. Enable the CloudFront option to restrict viewer access and update the bucket policy to allow the distribution.

 

Answer: D

 

Explanation:

https://medium.com/tensult/creating-aws-cloudfront-distribution-with-s3-origin-ee47b8122727

Q287. A Solutions Architect is designing a new architecture that will use an Amazon EC2 Auto Scaling group.

 

Which of the following factors determine the health check grace period? (Select TWO.)

 

  1. How frequently the Auto Scaling group scales up or down.
  2. How many Amazon CloudWatch alarms are configured for status checks.
  3. How much of the application code is embedded in the AMI.
  4. How long it takes for the Auto Scaling group to detect a failure.
  5. How long the bootstrap script takes to run.

 

Answer: AD

 

Q288. A company plans to deploy a new application in AWS that reads and writes information to a database. The company wants to deploy the application in two different AWS Regions in an active-active configuration. The databases need to replicate to keep information in sync.

 

What should be used to meet these requirements?

 

  1. Amazon Athena with Amazon S3 cross-region replication
  2. AWS Database Migration Service with change data capture
  3. Amazon DynamoDB with global tables
  4. Amazon RDS for PostgreSQL with a cross-region Read Replica

 

Answer: D

 

Q289. A company is developing a data lake solution in Amazon S3 to analyze large-scale datasets. The solution makes infrequent SQL queries only. In addition, the company wants to minimize infrastructure costs.

 

Which AWS service should be used to meet these requirements?

 

  1. Amazon Athena
  2. Amazon Redshift Spectrum
  3. Amazon RDS for PostgreSQL
  4. Amazon Aurora

 




Answer: B

 

Explanation:

https://docs.aws.amazon.com/aws-technical-content/latest/building-data-lakes/in-place-querying.html

Q290. A company needs to store data for 5 years. The company will need to have immediate and highly available access to the data at any point in time, but will not require frequent access.

 

What lifecycle action should be taken to meet the requirements while reducing costs?

 

  1. Transition objects from Amazon S3 Standard to Amazon S3 Standard-Infrequent Access (S3 Standard-IA)
  2. Transition objects to expire after 5 years.
  3. Transition objects from Amazon S3 Standard to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)
  4. Transition objects from Amazon S3 Standard to the GLACIER storage class.

 

Answer: D

 

Explanation:

https://aws.amazon.com/cn/s3/storage-classes/

S3 One Zone-IA is for data that is accessed less frequently, but requires rapid access when needed. S3 One Zone-IA storage redundantly stores data within that single Availability Zone to deliver storage at 20% less cost than geographically redundant S3 Standard-IA storage.
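
The keyed answer is implemented as an S3 lifecycle rule transitioning objects to the GLACIER storage class; option 1 would use the same API with StorageClass "STANDARD_IA". A sketch with a hypothetical bucket name and illustrative day counts:

```python
import boto3

s3 = boto3.client("s3")

# Lifecycle rule: move objects to Glacier after an initial active period and
# expire them once the 5-year retention window has passed.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",             # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},            # apply to all objects
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],  # illustrative
            "Expiration": {"Days": 1825},        # roughly 5 years
        }],
    },
)
```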



Q291. A company wants to create an application that will transmit protected health information (PHI) to thousands of service consumers in different AWS accounts. The application servers will sit in private VPC subnets. The routing for the application must be fault tolerant.

 

What should be done to meet these requirements?

 

  1. Create a VPC endpoint service and grant permissions to specific service consumers to create a connection.
  2. Create a virtual private gateway connection between each pair of service provider VPCs and service consumer VPCs.
  3. Create an internal Application Load Balancer in the service provider VPC and put application servers behind it.
  4. Create a proxy server in the service provider VPC to route requests from service consumers to the application servers.

 

Answer: A

 

Q292. A company hosts a website using Amazon API Gateway on the front end. Recently, there has been heavy traffic on the website and the company wants to control access by allowing authenticated traffic only.

 

How should the company limit access to authenticated users only? (Select TWO.)

 

  1. Allow users that are authenticated through Amazon Cognito.
  2. Limit traffic through API Gateway.
  3. Allow X.509 certificates to authenticate traffic.
  4. Deploy AWS KMS to identify users.
  5. Assign permissions in AWS IAM to allow users.

 

Answer: AE

 

Q293. A company needs to use AWS resources to expand capacity for a website hosted in an on-premises data center. The AWS resources will include load balancers, Auto Scaling, and Amazon EC2 instances that will access an on-premises database. Network connectivity has been established, but no traffic is going to the AWS environment.

 

How should Amazon Route 53 be configured to distribute load to the AWS environment? (Select TWO.)

 

  1. Set up a weighted routing policy, distributing the workload between the load balancer and the on-premises environment.
  2. Set up an A record to point the DNS name to the IP address of the load balancer.
  3. Create multiple A records for the EC2 instances.
  4. Set up a geolocation routing policy to distribute the workload between the load balancer and the on-premises environment.
  5. Set up a routing policy for failover using the on-premises environment as primary and the load balancer as secondary.

 

Answer: AB

 

Q294. A Solutions Architect is reviewing an application that writes data to an Amazon DynamoDB table on a daily basis. Random table reads occur many times per second. The company needs to allow thousands of low-latency reads and avoid any negative impact on the rest of the application.

What should the Solutions Architect do to meet the company’s goals?

 

  1. Use DynamoDB Accelerator to cache reads
  2. DynamoDB write capacity units
  3. Add Amazon SQS to decouple requests
  4. Implement Amazon Kinesis to decouple requests

 

Answer: B

 

Q295. A company is launching a dynamic website and the Operations team expects up to 10 times the traffic on the launch date. This website is hosted on Amazon EC2 instances and traffic is distributed by Amazon Route 53. A Solutions Architect must ensure that there is enough backend capacity to meet user demands. The Operations team wants to scale down as quickly as possible after the launch.

What is the MOST cost-effective and fault-tolerant solution that will meet the company’s customer demands? (Select TWO)

 

  1. Set up an Application Load Balancer to distribute traffic to multiple EC2 instances
  2. Set up an Auto Scaling group across multiple Availability Zones for the website, and create scale-out and scale-in policies
  3. Create an Amazon CloudWatch alarm to send an email through Amazon SNS when EC2 instances experience higher loads
  4. Create an AWS Lambda function to monitor website load time, run it every 5 minutes, and use the AWS SDK to create a new instance if website load time is longer than 2 seconds
  5. Use Amazon CloudFront to cache the website content during launch, and set a TTL for cache content to expire after the launch date

 

Answer: AB

 

Q296. A Solutions Architect is considering possible options for improving the security of the data stored on an Amazon EBS volume attached to an Amazon EC2 instance. Which solution will improve the security of the data?

 

  1. Use AWS KMS to encrypt the EBS volume
  2. Create an IAM policy that restricts read and write access to the volume
  3. Migrate the sensitive data to an instance store volume
  4. Use AWS Single Sign-On to control login access to the EC2 instance

 

Answer: A

 

Q297. A Solutions Architect is designing an application in AWS. The Architect must not expose the application or database tier over the Internet for security reasons. The application must be low-cost and have a scalable front end. The database and application tiers must have only one-way Internet access to download software and patch updates.

Which solution helps to meet these requirements?

 

  1. Use a NAT Gateway as the front end for the application tier and to enable the private resources to have Internet access
  2. Use an Amazon EC2-based proxy server as the front end for the application tier and a NAT Gateway to allow Internet access for private resources
  3. Use an ELB Classic Load Balancer as the front end for the application tier, and an Amazon EC2 proxy server to allow Internet access for private resources
  4. Use an ELB Classic Load Balancer as the front end for the application tier, and a NAT Gateway to allow Internet access for private resources

 

Answer: D

 

Q298. A company is designing a new application to collect data on user behavior for analysis at a later time. Amazon Kinesis Data Streams will be used to receive user interaction events. What should be done to ensure the event data is retained indefinitely?

 

  1. Configure the stream to write records to an attached Amazon EBS volume
  2. Configure an Amazon Kinesis Data Firehose delivery stream to store data on Amazon S3
  3. Configure the stream data retention period to retain the data indefinitely
  4. Configure an Amazon EC2 consumer to read from the data stream and store records in Amazon SQS

 

Answer: B
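
The keyed answer pairs the data stream with a Kinesis Data Firehose delivery stream that writes to S3, where objects can be kept indefinitely. A hedged boto3 sketch; the stream, role, and bucket ARNs are hypothetical:

```python
import boto3

firehose = boto3.client("firehose")

# Firehose delivery stream that drains the Kinesis data stream into S3, so the
# events are retained beyond the data stream's retention window.
firehose.create_delivery_stream(
    DeliveryStreamName="user-events-to-s3",
    DeliveryStreamType="KinesisStreamAsSource",
    KinesisStreamSourceConfiguration={
        "KinesisStreamARN": "arn:aws:kinesis:us-east-1:111122223333:stream/user-events",
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-read-stream",
    },
    S3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-write-s3",
        "BucketARN": "arn:aws:s3:::example-user-events-archive",
    },
)
```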

 



Q299. An application server needs to be in a private subnet without access to the Internet. The solution must allow the server to retrieve data from and upload data to an Amazon S3 bucket.

How should a Solutions Architect design a solution to meet these requirements?

 

  1. Use Amazon S3 VPC endpoints
  2. Deploy a proxy server
  3. Use a NAT Gateway
  4. Use a private Amazon S3 bucket

 

Answer: A
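
The keyed answer is an S3 gateway VPC endpoint, which adds an S3 prefix-list route to the private subnet's route table so no internet or NAT path is needed. A minimal sketch with placeholder IDs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for S3 associated with the private subnet's route table.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",                 # hypothetical VPC
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],       # hypothetical route table
)
```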

Q300. A photo-sharing website running on AWS allows users to generate thumbnail images of photos stored in Amazon S3. An Amazon DynamoDB table maintains the locations of the photos, and thumbnails are easily re-created from the originals if they are accidentally deleted. How should the thumbnail images be stored to ensure the LOWEST cost?

 

  1. Amazon S3 Standard-Infrequent Access (S3 Standard-IA) with cross-region replication
  2. Amazon S3
  3. Amazon Glacier
  4. Amazon S3 with cross-region replication

 

Answer: A




2 Comments

  1. What is the MOST cost-efficient way to improve the database response times?

    A data-processing application runs on an i3.large EC2 instance with a single 100 GB EBS gp2 volume. The application stores temporary data in a small database (less than 30 GB) located on the EBS root volume. The application is struggling to process the data fast enough, and a Solutions Architect has determined that the I/O speed of the temporary database is the bottleneck.
    What is the MOST cost-efficient way to improve the database response times?
    A. Enable EBS optimization on the instance and keep the temporary files on the existing volume.
    B. Put the temporary database on a new 50-GB EBS gp2 volume.
    C. Move the temporary database onto instance storage.
    D. Put the temporary database on a new 50-GB EBS io1 volume with a 3-K IOPS provision

    Why not C? It’s temporary data.

    Anonymous
  2. Q294. A Solutions Architect is reviewing an application that writes data to an Amazon DynamoDB table on a daily basis. Random table reads occur many times per second. The company needs to allow thousands of low-latency reads and avoid any negative impact on the rest of the application.

    What should the Solutions Architect do to meet the company’s goals?

    Why is the answer B:
    DynamoDB write capacity units

    The question is: The company needs to allow thousands of low-latency reads.
    I think the answer is: A. Use DynamoDB Accelerator to cache reads.

    ThaiNT
