21.Does DynamoDB support in-place atomic updates?
A. Yes
B. No
C. It does support in-place non-atomic updates
D. It is not defined
Answer: A
Explanation:
DynamoDB supports in-place atomic updates.
Reference:
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithItems.html#WorkingWithItems.AtomicCounters
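As a minimal boto3 sketch of the atomic counter described above (the table name and key schema here are hypothetical):

import boto3

# Hypothetical table "page-views" with partition key "url".
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("page-views")

# ADD increments the attribute server-side in a single atomic write,
# so concurrent callers cannot lose updates to each other.
table.update_item(
    Key={"url": "/index.html"},
    UpdateExpression="ADD views :inc",
    ExpressionAttributeValues={":inc": 1},
)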
22.Your manager has just given you access to multiple VPN connections that someone else has recently
set up between all your company’s offices. She needs you to make sure that the communication between
the VPNs is secure.
Which of the following services would be best for providing a low-cost hub-and-spoke model for primary
or backup connectivity between these remote offices?
A. Amazon CloudFront
B. AWS Direct Connect
C. AWS CloudHSM
D. AWS VPN CloudHub
Answer: D
Explanation:
If you have multiple VPN connections, you can provide secure communication between sites using
the AWS VPN CloudHub. The VPN CloudHub operates on a simple hub-and-spoke model that you can
use with or without a VPC. This design is suitable for customers with multiple branch offices and
existing Internet connections who would like to implement a convenient, potentially low-cost
hub-and-spoke model for primary or backup connectivity between these remote offices.
Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPN_CloudHub.html
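The CloudHub topology itself is just one virtual private gateway (the hub) with several BGP-speaking customer gateways attached (the spokes). A hedged boto3 sketch, with hypothetical office IPs and ASNs:

import boto3

ec2 = boto3.client("ec2")

# One virtual private gateway acts as the hub.
vgw_id = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]["VpnGatewayId"]

# Each office becomes a spoke: a customer gateway with a unique BGP
# ASN, which is what lets CloudHub re-advertise each site's routes to
# the other sites. The public IPs and ASNs below are hypothetical.
offices = [("198.51.100.10", 65001), ("203.0.113.20", 65002)]
for public_ip, asn in offices:
    cgw_id = ec2.create_customer_gateway(
        Type="ipsec.1", PublicIp=public_ip, BgpAsn=asn
    )["CustomerGateway"]["CustomerGatewayId"]
    ec2.create_vpn_connection(
        Type="ipsec.1", CustomerGatewayId=cgw_id, VpnGatewayId=vgw_id
    )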
23.Amazon EC2 provides a ____.
It is an HTTP or HTTPS request that uses the HTTP verbs GET or POST.
A. web database
B. .net framework
C. Query API
D. C library
Answer: C
Explanation:
Amazon EC2 provides a Query API. These requests are HTTP or HTTPS requests that use the
HTTP verbs GET or POST and a Query parameter named Action.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/APIReference/making-api-requests.html
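To illustrate the request shape, the sketch below only builds a Query API URL; a real call must also carry Signature Version 4 authentication parameters or headers:

from urllib.parse import urlencode

# Shape of an EC2 Query API GET request: endpoint, Action, Version.
endpoint = "https://ec2.us-east-1.amazonaws.com/"
params = {"Action": "DescribeInstances", "Version": "2016-11-15"}
print(endpoint + "?" + urlencode(params))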
24.In Amazon AWS, which of the following statements is true of key pairs?
A. Key pairs are used only for Amazon SDKs.
B. Key pairs are used only for Amazon EC2 and Amazon CloudFront.
C. Key pairs are used only for Elastic Load Balancing and AWS IAM.
D. Key pairs are used for all Amazon services.
Answer: B
Explanation:
Key pairs consist of a public and private key, where you use the private key to create a digital
signature, and then AWS uses the corresponding public key to validate the signature. Key pairs are used
only for Amazon EC2 and Amazon CloudFront.
Reference: http://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html
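For illustration, a minimal boto3 sketch of creating an EC2 key pair (the key name is hypothetical); AWS stores the public key and returns the private key material exactly once, so it must be saved immediately:

import boto3

ec2 = boto3.client("ec2")

# Hypothetical key name; the private key is only returned on creation.
resp = ec2.create_key_pair(KeyName="example-key")
with open("example-key.pem", "w") as f:
    f.write(resp["KeyMaterial"])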
25.Does Amazon DynamoDB support both increment and decrement atomic operations?
A. Only increment, since decrement operations are inherently impossible with DynamoDB’s data model.
B. No, neither increment nor decrement operations.
C. Yes, both increment and decrement operations.
D. Only decrement, since increment operations are inherently impossible with DynamoDB’s data model.
Answer: C
Explanation:
Amazon DynamoDB supports increment and decrement atomic operations.
Reference: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/APISummary.html
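Building on the counter sketch from question 21 (same hypothetical table and key), a decrement is simply an atomic ADD with a negative operand:

import boto3

table = boto3.resource("dynamodb").Table("page-views")

# Negative operand = atomic decrement.
table.update_item(
    Key={"url": "/index.html"},
    UpdateExpression="ADD views :delta",
    ExpressionAttributeValues={":delta": -1},
)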
26.An organization has three separate AWS accounts, one each for development, testing, and
production. The organization wants the testing team to have access to certain AWS resources in the
production account.
How can the organization achieve this?
A. It is not possible to access resources of one account with another account.
B. Create IAM roles with cross-account access.
C. Create the IAM user in a test account, and allow it access to the production environment with the
IAM policy.
D. Create IAM users with cross-account access.
Answer: B
Explanation:
An organization often uses multiple AWS accounts to isolate a development environment from a testing or production environment. At times, users from one account need to access resources in another account, such as when promoting an update from the development environment to the production environment. In this case an IAM role with cross-account access provides the solution: cross-account access lets one account share access to its resources with users in other AWS accounts.
Reference: http://media.amazonwebservices.com/AWS_Security_Best_Practices.pdf
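A hedged boto3 sketch of that flow: a user in the test account assumes a role that the production account created with a trust policy naming the test account (the role ARN and account ID are hypothetical):

import boto3

sts = boto3.client("sts")

# Hypothetical role in the production account.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/TestersProdAccess",
    RoleSessionName="testing-session",
)["Credentials"]

# The temporary credentials carry the role's permissions in the
# production account, e.g. to read a production S3 bucket.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)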
27.You need to import several hundred megabytes of data from a local Oracle database to an Amazon
RDS DB instance.
What does AWS recommend you use to accomplish this?
A. Oracle export/import utilities
B. Oracle SQL Developer
C. Oracle Data Pump
D. DBMS_FILE_TRANSFER
Answer: C
Explanation:
How you import data into an Amazon RDS DB instance depends on the amount of data you have and the number and variety of database objects in your database.
For example, you can use Oracle SQL Developer to import a simple, 20 MB database; you would want to use Oracle Data Pump to import complex databases, or databases that are several hundred megabytes or several terabytes in size.
Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Oracle.Procedural.Importing.html
28.A user has created an EBS volume with 1000 provisioned IOPS.
As per the EC2 SLA, what is the minimum IOPS the user will get for most of the year if the volume is attached to an EBS-optimized instance?
A. 950
B. 990
C. 1000
D. 900
Answer: D
Explanation:
As per the AWS SLA, if the volume is attached to an EBS-optimized instance, Provisioned IOPS volumes are designed to deliver within 10% of the provisioned IOPS performance 99.9% of the time in a given year. Thus, if the user has created a volume of 1000 IOPS, the user will get a minimum of 900 IOPS (1000 × 0.9) 99.9% of the time during the year.
Reference: http://aws.amazon.com/ec2/faqs/
29.You need to migrate a large amount of data that you have stored on a hard disk into the cloud. You decide that the best way to accomplish this is with AWS Import/Export, so you mail the hard disk to AWS.
Which of the following statements is incorrect with regard to AWS Import/Export?
A. It can export from Amazon S3.
B. It can import to Amazon Glacier.
C. It can export from Amazon Glacier.
D. It can import to Amazon EBS.
Answer: C
Explanation:
AWS Import/Export supports:
Import to Amazon S3
Export from Amazon S3
Import to Amazon EBS
Import to Amazon Glacier
AWS Import/Export does not currently support export from Amazon EBS or Amazon Glacier.
Reference: https://docs.aws.amazon.com/AWSImportExport/latest/DG/whatisdisk.html
30.You are in the process of creating a Route 53 DNS failover configuration to direct traffic across two regions. Obviously, if one region fails, you would like Route 53 to direct traffic to the other region. Each region has an ELB with some instances distributed behind it.
What is the best way for you to configure the Route 53 health check?
A. Route 53 doesn’t support ELB with an internal health check. You need to create your own Route 53 health check of the ELB.
B. Route 53 natively supports ELB with an internal health check. Turn “Evaluate target health” off
and “Associate with Health Check” on and R53 will use the ELB’s internal health check.
C. Route 53 doesn’t support ELB with an internal health check. You need to associate your
resource record set for the ELB with your own health check
D. Route 53 natively supports ELB with an internal health check. Turn “Evaluate target health” on
and “Associate with Health Check” off and R53 will use the ELB’s internal health check.
Answer: D
Explanation:
With DNS Failover, Amazon Route 53 can help detect an outage of your website and redirect your
end users to alternate locations where your application is operating properly. When you enable this
feature, Route 53 uses health checks—regularly making Internet requests to your application’s endpoints
from multiple locations around the world—to determine whether each endpoint of your application is up or
down.
To enable DNS Failover for an ELB endpoint, create an Alias record pointing to the ELB and set
the “Evaluate Target Health” parameter to true. Route 53 creates and manages the health checks for
your ELB automatically. You do not need to create your own Route 53 health check of the ELB. You also
do not need to associate your resource record set for the ELB with your own health check, because Route
53 automatically associates it with the health checks that Route 53 manages on your behalf. The ELB
health check will also inherit the health of your backend instances behind that ELB.
Reference: http://aws.amazon.com/about-aws/whats-new/2013/05/30/amazon-route-53-adds-elb-integration-for-dns-failover/
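A hedged boto3 sketch of such an Alias record; the hosted zone IDs, DNS names, and ELB details are hypothetical placeholders, and a matching SECONDARY record for the other region would complete the failover pair:

import boto3

r53 = boto3.client("route53")

# UPSERT an Alias A record at the ELB with EvaluateTargetHealth on.
r53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "A",
            "SetIdentifier": "primary",
            "Failover": "PRIMARY",
            "AliasTarget": {
                "HostedZoneId": "Z2EXAMPLE",  # the ELB's own zone ID
                "DNSName": "my-elb-1234.us-east-1.elb.amazonaws.com",
                "EvaluateTargetHealth": True,
            },
        },
    }]},
)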
31.A user wants to use an EBS-backed Amazon EC2 instance for a temporary job. Based on the input
data, the job is most likely to finish within a week.
Which of the following steps should be followed to terminate the instance automatically once the job is
finished?
A. Configure the EC2 instance with a stop instance to terminate it.
B. Configure the EC2 instance with ELB to terminate the instance when it remains idle.
C. Configure a CloudWatch alarm on the instance that performs the termination action once the instance is idle.
D. Configure the Auto Scaling schedule activity that terminates the instance after 7 days.
Answer: C
Explanation:
Auto Scaling can start and stop instances at a pre-defined time, but here the total running time is unknown. Thus, the user has to use a CloudWatch alarm that monitors CPU utilization. The user can create an alarm that is triggered when the average CPU utilization has been lower than 10 percent for 24 hours, signaling that the instance is idle and no longer in use. When the utilization drops below the threshold, the alarm terminates the instance as its alarm action.
Reference: http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/UsingAlarmActions.html
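A hedged boto3 sketch of such an alarm, using the built-in EC2 terminate alarm action (the instance ID and region are hypothetical):

import boto3

cloudwatch = boto3.client("cloudwatch")

# Terminate when average CPU stays below 10% for 24 one-hour periods.
cloudwatch.put_metric_alarm(
    AlarmName="terminate-when-idle",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=3600,
    EvaluationPeriods=24,
    Threshold=10.0,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:terminate"],
)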
32.Which of the following is true of Amazon EC2 security groups?
A. You can modify the outbound rules for EC2-Classic.
B. You can modify the rules for a security group only if the security group controls the traffic for just
one instance.
C. You can modify the rules for a security group only when a new instance is created.
D. You can modify the rules for a security group at any time.
Answer: D
Explanation:
A security group acts as a virtual firewall that controls the traffic for one or more instances. When
you launch an instance, you associate one or more security groups with the instance. You add rules to
each security group that allow traffic to or from its associated instances. You can modify the rules for a
security group at any time; the new rules are automatically applied to all instances that are associated
with the security group.
Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/using-network-security.html
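For example, a hedged boto3 sketch that opens HTTPS on an existing security group (the group ID is hypothetical); the new rule takes effect immediately on all associated instances:

import boto3

ec2 = boto3.client("ec2")

# Modify a live security group; no instance restart is needed.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)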
33.An Elastic IP address (EIP) is a static IP address designed for dynamic cloud computing. With an EIP,
you can mask the failure of an instance or software by rapidly remapping the address to another instance
in your account. Your EIP is associated with your AWS account, not a particular EC2 instance, and
it remains associated with your account until you choose to explicitly release it. By default, how many EIPs is each AWS account limited to on a per-region basis?
A. 1
B. 5
C. Unlimited
D. 10
Answer: B
Explanation:
By default, all AWS accounts are limited to 5 Elastic IP addresses per region, because public (IPv4) Internet addresses are a scarce public resource. AWS strongly encourages you to use an EIP primarily for load balancing use cases, and to use DNS hostnames for all other inter-node communication.
If you feel your architecture warrants additional EIPs, you need to complete the Amazon EC2 Elastic IP Address Request Form and explain your need for the additional addresses.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html#using-instance-addressing-limit
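A minimal boto3 sketch of allocating one of those five addresses and associating it with an instance (the instance ID is hypothetical); remapping the same address to a replacement instance is how an EIP masks a failure:

import boto3

ec2 = boto3.client("ec2")

# Allocate an EIP, then point it at an instance.
alloc = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    AllocationId=alloc["AllocationId"],
    InstanceId="i-0123456789abcdef0",
)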
34.In Amazon EC2, partial instance-hours are billed _____.
A. per second used in the hour
B. per minute used
C. by combining partial segments into full hours
D. as full hours
Answer: D
Explanation:
Partial instance-hours are rounded up and billed as full hours.
Reference: http://aws.amazon.com/ec2/faqs/
35.In EC2, what happens to the data in an instance store if an instance reboots (either intentionally
or unintentionally)?
A. Data is deleted from the instance store for security reasons.
B. Data persists in the instance store.
C. Data is partially present in the instance store.
D. Data in the instance store will be lost.
Answer: B
Explanation:
The data in an instance store persists only during the lifetime of its associated instance. If an
instance reboots (intentionally or unintentionally), data in the instance store persists.
However, data on instance store volumes is lost under the following circumstances:
Failure of an underlying drive
Stopping an Amazon EBS-backed instance
Terminating an instance
Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/InstanceStorage.html
36.You are setting up a VPC and you need to set up a public subnet within that VPC.
Which of the following requirements must be met for this subnet to be considered a public subnet?
A. Subnet’s traffic is not routed to an internet gateway but is routed to a virtual private gateway.
B. Subnet’s traffic is routed to an internet gateway.
C. Subnet’s traffic is not routed to an internet gateway.
D. None of these answers can be considered a public subnet.
Answer: B
Explanation:
A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. It is logically
isolated from other virtual networks in the AWS cloud. You can launch your AWS resources, such as
Amazon EC2 instances, into your VPC. You can configure your VPC: you can select its IP address range,
create subnets, and configure route tables, network gateways, and security settings.
A subnet is a range of IP addresses in your VPC. You can launch AWS resources into a subnet that you select. Use a public subnet for resources that must be connected to the internet, and a private subnet for resources that won’t be connected to the internet.
If a subnet’s traffic is routed to an internet gateway, the subnet is known as a public subnet.
If a subnet doesn’t have a route to the internet gateway, the subnet is known as a private subnet.
If a subnet doesn’t have a route to the internet gateway, but has its traffic routed to a virtual
private gateway, the subnet is known as a VPN-only subnet.
Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html
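A hedged boto3 sketch of the routing change that makes a subnet public (all resource IDs are hypothetical):

import boto3

ec2 = boto3.client("ec2")

# The 0.0.0.0/0 route to an internet gateway is precisely what makes
# the associated subnet "public".
ec2.attach_internet_gateway(
    InternetGatewayId="igw-0123456789abcdef0",
    VpcId="vpc-0123456789abcdef0",
)
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId="igw-0123456789abcdef0",
)
ec2.associate_route_table(
    RouteTableId="rtb-0123456789abcdef0",
    SubnetId="subnet-0123456789abcdef0",
)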
37.Can you specify the security group that you created for a VPC when you launch an instance
in EC2-Classic?
A. No, you can specify the security group created for EC2-Classic when you launch a VPC instance.
B. No
C. Yes
D. No, you can specify the security group created for EC2-Classic to a non-VPC based instance only.
Answer: B
Explanation:
If you’re using EC2-Classic, you must use security groups created specifically for EC2-Classic. When
you launch an instance in EC2-Classic, you must specify a security group in the same region as the
instance. You can’t specify a security group that you created for a VPC when you launch an instance
in EC2-Classic.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#ec2-cla
ssic-security-groups
38.When using EC2 GET requests as URLs, the _____ is the URL that serves as the entry point for the web service.
A. token
B. endpoint
C. action
D. None of these
Answer: B
Explanation:
The endpoint is the URL that serves as the entry point for the web service.
Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/using-query-api.html
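For illustration, boto3 normally derives the endpoint from the region, but it can also be set explicitly, which makes the endpoint's entry-point role visible:

import boto3

# Every request from this client enters the service via this URL.
ec2 = boto3.client(
    "ec2",
    region_name="us-east-1",
    endpoint_url="https://ec2.us-east-1.amazonaws.com",
)
print(ec2.meta.endpoint_url)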
39.You have been asked to build a data warehouse using Amazon Redshift. You know a little about it, including that it is a SQL data warehouse solution that uses industry-standard ODBC and JDBC connections and PostgreSQL drivers.
However, you are not sure what sort of storage it uses for database tables.
What sort of storage does Amazon Redshift use for database tables?
A. InnoDB Tables
B. NDB data storage
C. Columnar data storage
D. NDB CLUSTER Storage
Answer: C
Explanation:
Amazon Redshift achieves efficient storage and optimum query performance through a combination
of massively parallel processing, columnar data storage, and very efficient, targeted data
compression encoding schemes.
Columnar storage for database tables is an important factor in optimizing analytic query
performance because it drastically reduces the overall disk I/O requirements and reduces the amount of
data you need to load from disk.
Reference: http://docs.aws.amazon.com/redshift/latest/dg/c_columnar_storage_disk_mem_mgmnt.html
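Since Redshift speaks the PostgreSQL wire protocol, a hedged psycopg2 sketch can show the columnar model surfacing in the DDL: compression encodings are chosen per column (the connection details, table, and encodings here are illustrative):

import psycopg2  # usable because Redshift supports PostgreSQL drivers

# Hypothetical cluster endpoint and credentials.
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="dev", user="admin", password="...",
)

# With columnar storage, each column carries its own ENCODE scheme.
with conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE page_hits (
            url    VARCHAR(2048) ENCODE zstd,
            hit_ts TIMESTAMP     ENCODE az64,
            status INTEGER       ENCODE az64
        );
    """)
conn.commit()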
40.You are checking the workload on some of your General Purpose (SSD) and Provisioned
IOPS (SSD) volumes and it seems that the I/O latency is higher than you require. You should probably
check the _____________ to make sure that your application is not trying to drive more IOPS than you
have provisioned.
A. Amount of IOPS that are available
B. Acknowledgement from the storage subsystem
C. Average queue length
D. Time it takes for the I/O operation to complete
Answer: C
Explanation:
In EBS, workload demand plays an important role in getting the most out of your General Purpose
(SSD) and Provisioned IOPS (SSD) volumes. In order for your volumes to deliver the amount of IOPS that
are available, they need to have enough I/O requests sent to them. There is a relationship between
the demand on the volumes, the amount of IOPS that are available to them, and the latency of the
request (the amount of time it takes for the I/O operation to complete).
Latency is the true end-to-end client time of an I/O operation; in other words, when the client sends an I/O request, how long it takes to get an acknowledgement from the storage subsystem that the I/O read or write is complete.
If your I/O latency is higher than you require, check your average queue length to make sure that
your application is not trying to drive more IOPS than you have provisioned. You can maintain high IOPS
while keeping latency down by maintaining a low average queue length (which is achieved by
provisioning more IOPS for your volume).
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-workload-demand.html
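A hedged boto3 sketch of checking that average queue length from CloudWatch (the volume ID is hypothetical):

import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")

# Average queue length of one volume over the past hour; a value that
# stays high suggests the application is driving more IOPS than the
# volume has provisioned.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EBS",
    MetricName="VolumeQueueLength",
    Dimensions=[{"Name": "VolumeId", "Value": "vol-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])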