About the author: Zoiner Tejada has more than 18 years of experience in the software industry as a software architect, CTO, and start-up founder.

Exam Ref Developing Microsoft Azure Solutions, Second Edition (Michele Bustamante). Prepare for the Microsoft certification exam and demonstrate your real-world mastery of the skills needed to develop Microsoft Azure solutions.
The Exam Ref is the official study guide for Microsoft certification exams, featuring concise, objective-by-objective reviews and strategic case scenarios, and is designed for experienced developers.
Book description: Prepare for the Microsoft certification exam and demonstrate your real-world mastery of Microsoft Azure solution development. Designed for experienced developers ready to advance their status, this Exam Ref focuses on the critical-thinking and decision-making acumen needed for success at the Microsoft Specialist level.

Focus on the expertise measured by these objectives:
- Design and implement websites
- Create and manage virtual machines
- Design and implement cloud services
- Design and implement a storage strategy
- Manage application and network services

This Microsoft Exam Ref:
- Organizes its coverage by exam objectives
- Features strategic, what-if scenarios to challenge you
- Will be valuable for Microsoft Azure developers, solution architects, DevOps engineers, and QA engineers
- Assumes you have experience designing, programming, implementing, automating, and monitoring Microsoft Azure solutions, and that you are proficient with tools, techniques, and approaches for building scalable, resilient solutions

About the exam: The exam focuses on the skills and knowledge needed to develop Microsoft Azure solutions that include websites, virtual machines, cloud services, storage, application services, and network services.

About Microsoft certification: Passing this exam earns you a Microsoft Specialist certification in Microsoft Azure, demonstrating your expertise with the Microsoft Azure enterprise-grade cloud platform. See full details on the Microsoft certification site.
Contents (excerpt)

Design and implement applications for scale and resilience
- Selecting a pattern
- Implementing transient fault handling for services and responding to throttling
- Disabling Application Request Routing (ARR) affinity
- Objective summary and review
Answers
- Thought experiments and objective reviews

Chapter 2: Create and manage virtual machines
Configure VM networking
- Configuring DNS at the cloud service level
- Configuring endpoints with instance-level public IP addresses
- Configuring endpoints with reserved IP addresses
- Configuring access control lists
- Load balancing endpoints and configuring health probes
- Configuring Direct Server Return and keep-alive
- Leveraging name resolution within a cloud service
- Configuring firewall rules
- Objective summary and review
Design and implement VM storage
- Planning for storage capacity
- Configuring storage pools
- Configuring disk caching
- Configuring geo-replication
- Configuring shared storage using Azure File storage
- Objective summary and review
Answers
- Thought experiments and objective reviews

Chapter 3: Design and implement cloud services
Design and develop a cloud service
- Installing SDKs and emulators
- Developing a web or worker role
- Design and implement resiliency
- Developing startup tasks
- Objective summary and review
Configure cloud services and roles
- Configuring instance size and count
- Configuring auto-scale
- Configuring cloud service networking
- Configuring local storage
- Configuring multiple websites in a web role
- Configuring custom domains
- Configuring caching
- Objective summary and review
Deploy a cloud service
- Packaging a deployment
- Upgrading a deployment
- VIP swapping a deployment
- Implementing continuous delivery from Visual Studio Online
- Implementing runtime configuration changes using the management portal
- Configuring regions and affinity groups
- Objective summary and review
Monitor and debug a cloud service
- Configuring diagnostics
- Profiling resource consumption
- Enabling remote debugging
- Enabling and using Remote Desktop Protocol
- Debugging using IntelliTrace
- Debugging using the emulator
- Objective summary and review
Answers
- Thought experiments and objective reviews

About the exam: The exam focuses on skills and knowledge for building highly available solutions in the Microsoft Azure cloud.
About Microsoft certification: This exam is for candidates who are experienced in designing, programming, implementing, automating, and monitoring Microsoft Azure solutions. See full details on the Microsoft certification site.

2. …
B. Auto-scale
C. Perform outside-in monitoring
D. Monitor from multiple geographic locations

3. Out of the box, where can website diagnostic logs be stored?
A. Website file system
B. Azure Storage
C. SQL Database
D. Email

Objective: Implement WebJobs

WebJobs enables you to run a program or script as a background task within a website. These jobs can be scheduled to run on demand, continuously, or on a predefined schedule with a recurrence. Additionally, operations defined within WebJobs can be triggered to run either when the job runs or when a new file is created in Blob storage or a message is sent to an Azure queue.
With these attributes, the WebJobs SDK knows to invoke your methods so they run based on the appearance of a blob in Blob storage or a message in a queue and then just as easily output a new blob or message as a result of the invocation. WebJobs SDK handles the triggering of your decorated methods, and binding takes care of passing you a reference to the blob or message as desired.
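The WebJobs SDK itself is .NET; as a language-neutral illustration of the trigger-and-bind idea (the host watches a source and hands each new item directly to your method), here is a minimal sketch in Python. The function names and the in-memory queue are illustrative assumptions, not part of the SDK.

```python
def rename_file(message):
    # The "binding" hands the handler the item's content directly,
    # the way a QueueTrigger-decorated parameter receives the message.
    return "processed:" + message

def run_and_block(queue, handler):
    """Toy job host: drain the queue, invoking the handler once per item,
    and collect each handler's output (the 'output binding')."""
    outputs = []
    while queue:
        outputs.append(handler(queue.pop(0)))
    return outputs

pending = ["a.txt", "b.txt"]
print(run_and_block(pending, rename_file))
# ['processed:a.txt', 'processed:b.txt']
```

In the real SDK, the watching and dispatching is done for you by the JobHost; your code supplies only the decorated methods.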
The table below describes these attributes by example; the syntax is demonstrated later. With the trigger attribute, the name of the blob is captured, much as it is for MVC routes, into a blobName token that can be reused later by the Blob attribute, as well as by adding a parameter to the method named blobName. The Blob attribute is often used with the optional FileAccess.Write parameter to output the results of the method to the indicated blob. The following example shows how to create a new, standalone WebJobs project.
WebJobs always run within a website, but they can be created independently of a Websites project. In the list of templates, select Microsoft Azure WebJob. Name your project, and click OK. Open Program.cs, and modify the class Program so that it is public. If you do not make the class public, WebJobs will not detect and run your operations.
Inside Program.cs, add using statements for the WebJobs SDK namespace as well as System.IO (because the example will process blobs as Stream objects): using Microsoft.Azure.WebJobs; using System.IO;. Within Main, add the code to initialize and configure the behavior of the JobHost; in this case, have the host run and block so it listens for triggers. Next, add a method to be invoked by WebJobs. To run this WebJob locally, you need to apply some local configuration: open the App.config file and add the storage connection strings. Be sure to replace name and key values with the values from your own storage account.
Using Server Explorer, connect to the storage account you will be using, right-click Blobs, and select Create Blob Container. In the dialog box that appears, enter input for the blob container name, and click OK. Upload a few files to the container. One way to do this is to double-click the container in Server Explorer, click the Upload Blob button in the document window (the third button from the left in the toolbar at the top), and select the files to upload.
Finally, press F5, or click Start Debugging on the Debug menu. The console output shows the invoked method (RenameFile, in this example) when a new blob is detected. To confirm the files were copied to the output container, open the container in the tool of your choice (you can use the Server Explorer approach for this as well). Although a WebJob can be deployed as part of an ASP.NET Web Application, this makes it difficult to scale the WebJob independently of the website and also introduces the potential for resource contention between the website handling requests and the WebJob performing its processing.
Additionally, within a Visual Studio solution, it is possible to associate a WebJob with a web project at a later time if you consolidate the deployment.
Therefore, it is pragmatic to start your WebJob development as a standalone console project. When you are ready to publish, in the dialog box that appears, sign in if you have not done so already, and then click New to create a new website for this WebJob.
In the Create A Site On Microsoft Azure dialog box, provide a site name for the new website, and then specify a subscription and region as desired. Before your WebJob will run, you need to navigate to the website you just created in the portal and add the same two connection strings (AzureWebJobsStorage and AzureWebJobsDashboard) you previously added to App.config.
After you apply the connection strings, upload some files to the input container to trigger the job. Look for updates to the portal that provide more functionality and deliver features similar to the Azure WebJobs Dashboard.

Scheduling WebJobs

WebJobs can run on demand (when run from the portal), continuously (possibly in response to input from a Storage blob, Storage queue, or Service Bus queue), on a scheduled date and time, or at certain recurring intervals within a specified date range.
The schedule to use depends on your scenario. Scenarios that run only occasionally may work best as on-demand; scenarios that run in response to input from storage or service queues should run continuously; others may need to be scheduled according to the calendar. In the previous example, the WebJob was configured to run continuously. If instead you want the WebJob to run on a schedule, complete the following steps to reconfigure it in Visual Studio and then redeploy it: With the project open in Visual Studio, open Solution Explorer.
Expand the project, and then expand Properties. Right-click the file named webjob-publish-settings.json, and delete it.
You will re-create this file later using the Add Azure WebJob dialog box. Specify a recurrence, a starting date and time, and, if you selected Recurring Job as the recurrence, an ending date and time and a recurrence pattern (for example, Recur Every 1 Day).
Click OK to re-create the webjob-publish-settings.json file. Click Publish to deploy the WebJob with the new schedule. You certainly can hand-edit this file; in fact, Visual Studio provides IntelliSense support to help you do so. However, it is much easier to use the Add Azure WebJob dialog box to get all the settings correctly serialized to the file.
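For reference, a scheduled webjob-publish-settings.json generated by the dialog box looks roughly like the following. The field names follow the publicly posted schema, and every value here is an illustrative assumption; trust the file the dialog box actually generates over this sketch.

```json
{
  "$schema": "http://schemastore.org/schemas/json/webjob-publish-settings.json",
  "webJobName": "MyScheduledJob",
  "startTime": "2015-01-01T00:00:00+00:00",
  "endTime": "2015-12-31T00:00:00+00:00",
  "jobRecurrenceFrequency": "Day",
  "interval": 1,
  "runMode": "Scheduled"
}
```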
Thought experiment: You are considering using a WebJob to perform background processing. You would like the job to wake up and start processing based on a certain event.
1. What are the out-of-box options for triggering the job?
2. If, instead of waking up in response to an event, you want to set up batch processing at night, what are your options with WebJobs?

Objective review
1. …
A. They can only be triggered by a queue message.
B. They must be deployed with a web application.
C. They can only be written in C#.
D. All of the above.

2. A recurring WebJob can be configured to recur every how often?
A. Second
B. Minute
C. Hour
D. Day
E. Week
F. Month

3. A WebJob can be triggered as a result of which of the following?
A. A new blob added to a container
B. A new message in a storage queue
C. An on-demand request
D. A SQL trigger

Objective: Configure websites for scale and resilience

Azure Websites provides various mechanisms to scale your websites up and down by adjusting the number of VM instances serving requests and by adjusting the instance size. You can, for example, increase (scale up or, more precisely, scale out) the number of instances to support the load you experience during business hours, and then decrease (scale down or, more precisely, scale in) the number of instances during less busy hours to save costs.
Websites enables you to scale the instance count manually, automatically via a schedule, or automatically according to key performance metrics.
Within a datacenter, Azure will load balance traffic between all of your website instances using a round-robin approach. You can also scale a website by deploying to multiple regions around the world and then utilizing Microsoft Azure Traffic Manager to direct website traffic to the appropriate region, based on a round-robin strategy or according to performance (approximating the latency perceived by clients of your website).
Alternately, you can configure Traffic Manager to use the alternate regions as targets for failover if the primary region becomes unavailable. In addition to scaling instance counts, you can manually adjust your instance size. For example, you can scale up your website to utilize more powerful VMs that have more RAM and more CPU cores to serve applications that are more demanding of memory or CPU, or scale down your VMs if you later discover your requirements are not as great.
You do not need to provision more than one instance to benefit from this SLA.

Configuring auto-scale by schedule (existing portal)

To configure auto-scale by schedule in the management portal, complete the following steps: Navigate to the Scale tab of your website in the management portal (accessed via https:). Scroll down to Capacity.
Click Set Up Schedule Times. In the dialog box that appears, define the schedule; Azure will infer that the remaining hours are nighttime. Under Time, adjust the times for morning and evening. Specify a user-friendly name for the date range, a start date and time, and an end date and time. You can add multiple date ranges as long as they do not overlap.
When you finish defining schedules, click the check mark to make these schedules configurable on the Scale tab. Use the Instance Count slider to adjust the target number of instances for your website during each schedule.
Repeat the previous two steps as necessary for any other schedules you have defined. Click Save to save both the newly defined schedules and your instance count configurations for each schedule.
Configuring auto-scale by schedule (Preview portal)

It is not currently possible to configure auto-scale using a schedule with the Preview portal.

Configuring auto-scale by metric

Auto-scale by metric enables Azure to automatically adjust the number of instances provisioned to your web hosting plan based on one or more configured rules, where each rule has a condition, a metric, and an action to take in response to a threshold being exceeded.
Several performance-related metrics are currently available for use in rules. For each rule, you choose a metric and then define a condition for it. The condition compares the metric against a threshold: above the threshold, a scale-up action (adding instances) occurs; below it, a scale-down action (removing instances) occurs. You also specify the number of instances by which to scale up or down. You can specify both scale-up and scale-down actions for the same metric by using different rules.
You can also specify multiple rules, using different conditions, metrics, and actions. The frequency with which these rules are triggered is important to manage—you do not want to constantly add and remove instances because adding instances always takes some amount of time no matter how little.
Scaling frequency, and how you choose to stabilize it, is within your control. You stabilize scaling by specifying the period of time over which the threshold is computed and by setting a cool-down period that follows a scaling operation, during which scaling will not be triggered again. This is all best explained by example.
Consider a scenario where you start with one instance and you configure a rule such as the following: add one instance when average CPU utilization over the computation period exceeds a threshold, with a 15-minute cool-down after each scaling action. When the threshold is first exceeded, auto-scale adds one instance, increasing your total scale to two instances.
However, assume that the CPU utilization remains at 85 percent for the next 5 minutes, even with the additional instance helping to reduce the average CPU load. Auto-scale will not trigger again for another 15 minutes, because that time falls within the cool-down period for the scaling rule. You can also set different thresholds for scaling up and scaling down. For example, you can specify that one instance should be added every time the CPU utilization exceeds 80 percent, and that one instance should be removed only when the CPU utilization drops below 50 percent.
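To make the cool-down arithmetic concrete, here is a small simulation of a scale-up rule with a cool-down. The rule shape, the minute-based clock, and the numbers are illustrative assumptions, not the Azure auto-scale engine:

```python
from dataclasses import dataclass

@dataclass
class ScaleRule:
    threshold: float      # CPU % that triggers a scale-up
    step: int             # instances to add per trigger
    cooldown: int         # minutes to wait after a scaling action

def evaluate(samples, rule, instances, last_scale_at=None):
    """Walk minute-by-minute CPU samples and apply the rule, honoring
    the cool-down. Returns the final instance count."""
    for minute, cpu in enumerate(samples):
        in_cooldown = (last_scale_at is not None
                       and minute - last_scale_at < rule.cooldown)
        if cpu > rule.threshold and not in_cooldown:
            instances += rule.step
            last_scale_at = minute
    return instances

# CPU pinned at 85% for 20 minutes with a 15-minute cool-down: only two
# scale-ups fire (at minute 0 and minute 15), not twenty.
print(evaluate([85] * 20, ScaleRule(80, 1, 15), instances=1))  # 3
```

This dampening is exactly what the cool-down period is meant to provide.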
This gap between thresholds is another way to control the frequency of scaling operations. To specify this pair of rules in the existing portal, complete the following steps: Choose the CPU metric (this is the only metric supported by the existing portal). Adjust the Target CPU slider to specify the threshold below which scale-down actions will be taken; do this by entering a value in the left-most text box or by dragging the left-most slider handle.
Repeat the previous step to define the threshold for the scale-up action, using the right-most text box or slider handle to adjust the threshold. Using the Instance Count slider, define the minimum and maximum number of instances that auto-scale can reach at any point in time. Click Save to apply your auto-scale by metric rules.
To configure your rules in the Preview portal, complete the following steps: Scroll down to Usage, and click Scale. In the Scale blade that appears, under Choose Scale, click Performance. By default, one pair of rules for CPU percentage is already present. To quickly set scale-up and scale-down thresholds for the CPU percentage metric, leave the over-past period and cool-down period at their defaults, and adjust the left and right control knobs on the Target Metrics slider to the desired threshold values.
Choose a metric from the Metric drop-down list. On the Scale blade for your web hosting plan, using the Instance Count slider, define the minimum and maximum number of instances that auto-scale can reach at any point in time. To remove any undesired rules, click the X button to the right of its slider. Click Save to apply your auto-scale rules.
Auto-scale operates at the level of the web hosting plan; it does not apply directly to an individual website.

Changing the size of an instance

You can adjust the number of CPU cores and the amount of RAM available to your Websites VMs by adjusting the instance size in the existing portal, or by changing your pricing tier in the Preview portal.
In either case, you are adjusting the size of the instances used for your web hosting plan and therefore for all websites that are part of it.

Changing the size of an instance (existing portal)

The existing portal allows you to change the instance size for your web hosting plan by selecting an instance size on the Scale tab of your website. To do so, your website must be in the Basic or Standard web hosting plan tier. From Instance Size, choose the desired instance size.
Changing the size of an instance (Preview portal)

The Preview portal allows you to change the instance size for your web hosting plan by selecting a new pricing tier for it. Scroll down to Usage and click Pricing Tier. In the new blade, click a pricing tier, and then click Select.

It is important to understand that traffic does not flow through Traffic Manager to your website endpoint; rather, it is guided to your website endpoint as a result of DNS resolution.
To understand this better, assume you have a website at www.contoso.com. You would configure the DNS for contoso.com so that the www name points to your Traffic Manager profile's domain name. When a client, such as a browser, first tries to browse to www.contoso.com, it performs a DNS lookup that is ultimately answered by Traffic Manager.
Traffic Manager evaluates its configuration for a viable endpoint (effectively evaluating its rules and choosing from the list of endpoints you configured), and then replies with the domain name of a viable endpoint for your website, such as contoso-west. That name is in turn resolved to the IP address of the endpoint, and the browser actually sends its request to this IP address.
For a period of time, the IP address to which the www name was resolved is cached locally by the client. This period is referred to as the time-to-live, or TTL, for the local DNS cache entry, and it controls how long the client will continue to use a resolved endpoint; basically, until the TTL expires. When it expires, the client may perform another DNS lookup and, at this point, may learn of a new endpoint domain name (and, by extension, a new IP address) from Traffic Manager.
This means that you can only use Traffic Manager for subdomains, such as www.contoso.com. The fact that the resolution is time-based has a very important implication: at a given point in time, individual clients may resolve to different endpoints (for load balancing or failover), but once resolved, clients may not become aware that they need to communicate with a different endpoint until the TTL expires.
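The TTL behavior just described is ordinary client-side DNS caching, which can be sketched as follows; the resolver callback and endpoint names are illustrative assumptions:

```python
class DnsCache:
    """Sketch of client-side DNS caching: a resolved endpoint is reused
    until its TTL expires, which is why a Traffic Manager failover is not
    visible to an active client until the cache entry lapses."""
    def __init__(self, resolver, ttl_seconds):
        self.resolver = resolver
        self.ttl = ttl_seconds
        self.cache = {}  # name -> (endpoint, expires_at)

    def lookup(self, name, now):
        entry = self.cache.get(name)
        if entry and now < entry[1]:
            return entry[0]            # within TTL: reuse the old answer
        endpoint = self.resolver(name)  # TTL expired: ask Traffic Manager again
        self.cache[name] = (endpoint, now + self.ttl)
        return endpoint

answers = iter(["contoso-west", "contoso-east"])
cache = DnsCache(lambda name: next(answers), ttl_seconds=30)
print(cache.lookup("www.contoso.com", now=0))    # contoso-west
print(cache.lookup("www.contoso.com", now=10))   # still contoso-west
print(cache.lookup("www.contoso.com", now=31))   # contoso-east
```

Until the entry expires at `now + ttl`, the client keeps using the previously resolved endpoint, even if that endpoint has since failed.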
If you are using Traffic Manager primarily for failover, you might be tempted to set the TTL to a very low value to ensure that clients who had been communicating with a now unreachable endpoint can quickly start communicating with a functioning endpoint.
The minimum TTL allowed is 30 seconds, and you can use this value, but be aware that you are creating additional DNS traffic as well as incurring additional Traffic Manager costs to handle the increased load of DNS queries.
Each load balancing method can define its own TTL, list of endpoints, and monitoring configuration. The difference between them is primarily in how Traffic Manager chooses the endpoint from the list when responding to a DNS query. For the Performance method, Traffic Manager maintains a lookup table (the Internet Latency Table) of DNS server IP address ranges, and for each IP range it periodically collects the round-trip latency from servers in that range to an Azure datacenter region.
It locates the entry in the table that contains the IP address of the DNS server where the DNS query originated and then selects the website endpoint whose datacenter had the lowest latency. You can think of these methods as all having a failover element, because if Traffic Manager monitoring detects an endpoint as unhealthy, it is removed from the rotation and traffic is not guided to it.
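The three load balancing methods differ only in how a healthy endpoint is picked, which can be sketched like this; the endpoint names, health flags, and latency numbers are illustrative, and the real selection logic lives inside Traffic Manager:

```python
def choose_endpoint(method, endpoints, latencies=None, counter=0):
    """Sketch of endpoint selection. `endpoints` is an ordered list of
    (name, healthy) pairs; `latencies` maps name -> observed latency.
    Unhealthy endpoints are always excluded (the shared failover element)."""
    healthy = [name for name, ok in endpoints if ok]
    if not healthy:
        return None
    if method == "failover":        # first healthy endpoint in priority order
        return healthy[0]
    if method == "round_robin":     # rotate through the healthy endpoints
        return healthy[counter % len(healthy)]
    if method == "performance":     # lowest observed latency wins
        return min(healthy, key=lambda name: latencies[name])

eps = [("contoso-west", False), ("contoso-east", True), ("contoso-asia", True)]
print(choose_endpoint("failover", eps))                # contoso-east
print(choose_endpoint("round_robin", eps, counter=1))  # contoso-asia
print(choose_endpoint("performance", eps,
                      latencies={"contoso-east": 80, "contoso-asia": 40}))
# contoso-asia
```

Note that the unhealthy contoso-west endpoint is never returned by any method, matching the monitoring behavior described above.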
To configure Traffic Manager in the management portal, complete the following steps. Choose a load balancing method (Performance, Round Robin, or Failover). Click Create to create the basic Traffic Manager profile.
Click the name of the profile in the list of profiles. Click the Endpoints tab. Click Add Endpoints. In the dialog box that appears, under Service Type, select Web Site.
In the Service Endpoints list, select the website endpoints you want to include in this Traffic Manager profile. Click the check mark to complete selecting the endpoints. Click the Configure tab. Optionally, under Load Balancing Method Settings, change the load balancing method. If you choose the Failover method, modify the order of the endpoints by using the up and down arrows that appear when you hover over a website.
Under Monitoring Settings, choose the protocol (HTTP or HTTPS) to use for monitoring your website endpoints, and specify the port number.
If desired, provide a relative path to monitor.

Thought experiment: Your website is becoming quite successful, and traffic is growing steadily every day. Before the media frenzy, you are tasked with considering what steps you should take to support the future scalability requirements of your increasing traffic. You are based in the United States.
1. Users in Asia are complaining that the site is sluggish and slow to load. How might you apply Traffic Manager to improve their experience?
2. You have noticed a pattern: … How might you configure auto-scale to optimize your number of instances and, therefore, hosting costs? (Note that auto-scale does not affect instance size.)

Objective review
1. The failover load balancing method is also a feature of which of the following?
A. Failover
B. Round robin
C. Performance
D. All of the above
2. Which one of the following does auto-scale control?
A. Instance size
B. Instance count
C. Instance region
D. Instance memory

3. If you have a website set up with Traffic Manager for failover and the primary endpoint fails, what is the minimum amount of time active users will wait to fail over to the next endpoint?

Design and implement applications for scale and resilience

If your website will be accessible to any nominal number of users, you are likely concerned about its ability to scale to current and future loads, as well as its ability to remain available to those users.
Azure Websites provides a great deal of functionality for providing a scalable and resilient platform for hosting web applications, but how you design and implement your website, and the patterns you follow, equally affect how successful you are in achieving your target scale and resilience. This section focuses on three frequently applied web application patterns in particular: Throttling, Retry, and Circuit Breaker. You can read about these patterns online, download the documentation in PDF form (or order a printed copy), and view a poster summarizing all of the patterns.
The following patterns are particularly useful to the availability, resiliency, and scalability of Websites and WebJobs.

Throttling pattern: When a service under heavy load responds to additional requests with a server busy response, that is an example of throttling in action. The Throttling pattern quickly responds to increased load by restricting the consumption of resources by an application instance, a tenant, or an entire service, so that the system being consumed can continue to function and meet service level agreements.
The example shown in the figure is a scenario where paying customers get priority when the system is under heavy load. Throttling can take several forms: outright rejecting requests, degrading functionality (such as switching to a lower bit-rate video stream), focusing on high-priority requests (such as only processing messages from paid subscribers and not trial users), or deferring requests so that clients retry later (as in the HTTP case).
Throttling is often paired with auto-scaling; since scaling up is not instantaneous, throttling can help keep the system operational until the new resources come online, and the soft limit can be raised after they are available.
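A minimal sketch of the priority-based throttling just described, with an assumed soft limit and HTTP-style responses; the tier names and numbers are illustrative:

```python
def handle_request(request, load, soft_limit=0.8):
    """Sketch of the Throttling pattern: above the soft limit, paid
    subscribers are still served while trial users get a 'retry later'
    response, as an HTTP service might do under heavy load."""
    if load <= soft_limit:
        return "200 OK"                    # normal operation for everyone
    if request["tier"] == "paid":
        return "200 OK"                    # high-priority requests still served
    return "503 Retry-After: 120"          # low-priority requests deferred

print(handle_request({"tier": "paid"}, load=0.95))   # 200 OK
print(handle_request({"tier": "trial"}, load=0.95))  # 503 Retry-After: 120
print(handle_request({"tier": "trial"}, load=0.5))   # 200 OK
```

In a real system the soft limit would be derived from measured resource consumption, and raising it after auto-scale adds capacity restores full service.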
Responding to throttling by backing off and retrying makes your website more resilient, instead of immediately giving up when a throttling exception is encountered. If your web application is itself a service, implementing the Throttling pattern as part of your service logic makes your website more scalable in the face of rapid increases in load.

Retry pattern: If your application experiences a short-lived, temporary (transient) failure connecting to an external service, it should transparently retry the failed operation.
The most common example of this type of transient failure is connecting to a database that is overloaded and responding to new connection requests by refusing the connection. (Figure: a website retrying to connect with a database over multiple attempts.) For applications depending on this database, you should define a retry policy that retries the connection multiple times, with a back-off strategy that waits an increasing amount of time between retries.
With these definitions in place, only after the desired number of attempts have been made and failed does the retry mechanism raise an exception and abort further retry attempts. When your web application is a client of an external service, implementing smart retry logic increases the resiliency of your website because it will recover from transient failures that occur in communicating with the external service. The service side benefits as well, because the service logic can guide client requests in the expectation that the client will retry, in the near future, any operation that resulted in a transient failure.
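The Retry pattern with an exponential back-off can be sketched as follows. The book's examples use .NET and the Transient Fault Handling Application Block; this Python equivalent is illustrative, with `ConnectionError` standing in for whatever exceptions your client treats as transient:

```python
import time

def with_retries(operation, attempts=3, base_delay=0.1,
                 transient=(ConnectionError,)):
    """Invoke `operation`, retrying transient failures with an exponentially
    increasing delay; re-raise once the attempts are exhausted."""
    for attempt in range(attempts):
        try:
            return operation()
        except transient:
            if attempt == attempts - 1:
                raise                      # attempts exhausted: surface the error
            time.sleep(base_delay * (2 ** attempt))  # 0.1s, 0.2s, 0.4s, ...

calls = {"n": 0}
def flaky_connect():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("connection refused")  # transient failure
    return "connected"

print(with_retries(flaky_connect))  # connected
```

Only after the configured number of attempts fail does the caller see the exception, matching the behavior described above.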
Circuit Breaker pattern: An implementation of the Circuit Breaker pattern prevents an application from attempting an operation that is likely to fail, acting much like the circuit breaker for the electrical system in a house. The circuit breaker acts like a proxy for the application when invoking operations that may fail, particularly where the failure is long lasting.
If everything is working, the circuit breaker remains in the closed state and requests pass through to the operation. If the number of recent failures invoking the operation exceeds a threshold over some defined period of time, the circuit breaker is tripped and changes to the open state. In the open state, all requests from the application fail immediately, without an actual attempt to invoke the real operation (for example, without trying to invoke the operation on a remote service).
The open state also starts a cool-down timer. When this cool-down period expires, the circuit breaker switches to a half-open state, and a limited number of trial requests are allowed to flow through to the operation while the rest fail immediately; alternately, the code queries the health of the service hosting the operation.
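The closed/open/half-open state machine described above can be sketched as a small class; the thresholds, the explicit time argument, and the state names are illustrative:

```python
class CircuitBreaker:
    """Minimal sketch of the Circuit Breaker pattern: closed -> open after
    `max_failures`, open -> half-open after `cooldown` time units, and
    half-open -> closed on a successful trial call."""
    def __init__(self, max_failures=3, cooldown=60):
        self.max_failures, self.cooldown = max_failures, cooldown
        self.failures, self.opened_at, self.state = 0, None, "closed"

    def call(self, operation, now):
        if self.state == "open":
            if now - self.opened_at < self.cooldown:
                raise RuntimeError("circuit open: failing fast")
            self.state = "half-open"       # cool-down elapsed: allow a trial
        try:
            result = operation()
        except Exception:
            self.failures += 1
            if self.state == "half-open" or self.failures >= self.max_failures:
                self.state, self.opened_at = "open", now   # trip (or re-trip)
            raise
        self.failures, self.state = 0, "closed"            # success: reset
        return result

breaker = CircuitBreaker(max_failures=2, cooldown=30)
def failing():
    raise ConnectionError("service down")
for t in (0, 1):
    try:
        breaker.call(failing, now=t)
    except ConnectionError:
        pass
print(breaker.state)                        # open
print(breaker.call(lambda: "ok", now=45))   # ok (trial succeeds, breaker closes)
```

While open, callers fail fast without touching the remote service, which is the pattern's whole point.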
In the half-open state, if the trial requests succeed or the service responds as healthy, the failure is deemed repaired and the circuit breaker changes back to the closed state. Conversely, if the trial requests fail, the circuit breaker returns to the open state, and the timer starts anew.

Implementing transient fault handling for services and responding to throttling

Within your website application logic, you implement transient fault handling for services by configuring how your client code invokes the operations on the service.
For many Azure services accessed from .NET, you do not have to author the transient fault handling logic yourself; each service provides mechanisms either directly in the client library, or the client works in combination with the Transient Fault Handling Application Block.
These ready-made clients include all of the logic for identifying transient failures received from the service, including failures resulting from throttling.
If you are using a service that does not provide support for transient fault handling, you can use the Transient Fault Handling Application Block, which provides a framework for encapsulating the logic of which exceptions are transient, defines retry policies, and wraps your operation invocations so that the block handles the retry logic.
Make sure to accept the license prompt; again, accept the license prompt for any dependent packages. Close the Manage NuGet Packages dialog box. You should now have all the references you need added to your project. If you are using ADO.NET, note that you must first create the default RetryManager, after which you can create a ReliableSqlConnection that respects the retry and back-off settings you specify in the RetryPolicy. You can then use that connection to run whatever commands you desire.
For example, after opening the reliable connection, you can call connection.CreateCommand() and execute commands through it as usual. When your EF 6 model is in your project, you need to create a new class that derives from DbConfiguration and customizes the execution strategy in its constructor. EF 6 looks for classes that derive from DbConfiguration in your project and uses them to provide resiliency. To set this up, add a new class file to your project, add using statements for System.Data.Entity and its SQL Server provider namespace, and then replace the class code with a constructor that calls SetExecutionStrategy for the "System.Data.SqlClient" provider.
When this configuration is in place, you can use your model as you normally do and take advantage of the built-in transient fault handling. (To read more about these strategies, see the online documentation.) The Azure Storage client has similar support built in: you use the client to access blobs, tables, or queues as you normally would. However, if you would like to tailor the behavior, you can control the back-off strategy, delay, and number of retries. The code in the listing shows an example of how you could alter the delay and number of retries. Although the ExponentialRetry policy is the recommended strategy, you could also use LinearRetry or NoRetry if you want a linear back-off or no retry at all, respectively.
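The difference between the exponential and linear strategies is simply the delay schedule between attempts; for example (the base delay and attempt count here are arbitrary):

```python
def exponential_delays(base, attempts):
    # Delay doubles on each attempt: base, 2*base, 4*base, ...
    return [base * (2 ** i) for i in range(attempts)]

def linear_delays(base, attempts):
    # Delay grows by a fixed increment each attempt: base, 2*base, 3*base, ...
    return [base * (i + 1) for i in range(attempts)]

print(exponential_delays(2, 4))  # [2, 4, 8, 16]
print(linear_delays(2, 4))       # [2, 4, 6, 8]
```

Exponential back-off is usually preferred against a throttled service because it eases pressure off quickly as failures accumulate.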
For example, you can create the blob client with storageAccount.CreateCloudBlobClient() and then assign a retry policy to the blobClient before using it.

Disabling Application Request Routing (ARR) affinity

Azure Websites uses Application Request Routing (ARR) to pin a client, by cookie, to the website instance that served its first request. All subsequent requests from the client are guided to that original website instance, irrespective of how many other instances a website may have available, what the load is on that instance, or even whether that instance is available.
ARR can be a useful technology when a lot of state is loaded into memory for a given client and moving that state between server instances is prohibitively expensive or not possible at all. However, its use introduces statefulness into Websites and, by extension, limits the scalability of the system because clients become attached to a particular website instance.
ARR affinity can also become a problem because users tend to keep their browsers open for long periods of time. The website instance they originally connected to may have failed, but on their next request, ARR will try to guide them to that unavailable instance instead of to one of the other instances that are available.
It is possible to disable ARR for Websites by modifying the web.config file. Thought experiment: You are designing the logic for a REST service you will host in a website and are examining it from a scalability perspective. You are trying to decide between implementing throttling in the service logic and using auto-scale. Should you choose one over the other? Your service is completely stateless.
Should you disable ARR? In summary: transient fault handling is the pattern used to handle temporary failures when calling services, and ARR affinity keeps each client attached to one instance. While ARR can simplify a solution architecture, it can also cause scalability bottlenecks because a few instances may become overloaded.
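As noted above, disabling ARR for a website is done in web.config. A sketch of the change follows, assuming the Arr-Disable-Session-Affinity custom response header, which the Azure Websites load balancer recognizes as a signal not to issue the affinity cookie:

```xml
<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <!-- Tell the load balancer not to pin clients to this instance -->
        <add name="Arr-Disable-Session-Affinity" value="true" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>
```

With the header in place, requests from the same client can be distributed across all available instances, which suits stateless services.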
1. Which of these is not an example of throttling?
A. Server crash
B. Responding with server busy
C. Switching over to lower bit-rate streaming
D. Handling high-priority requests differently than low-priority requests when under load
2. If a transient fault is expected to take a long time to resolve for a service operation that is frequently invoked, which pattern might you consider implementing for the client?
A. Throttling
B. Retry
C. Transient
D. Circuit Breaker
3. After deploying a website that has multiple instances, you discover that one instance in particular seems to be handling most of the load. What is one possible culprit?
A. ARR affinity
B. Throttling
C. Transient fault handling
D. Retries
Objective 1. Thought experiment
1. You can use different deployment slots for testing and staging, and you can ultimately swap between staging and production to complete the deployment.
2. You should ensure that the website is the only one within its web hosting plan. Also, be careful about using deployment slots if your goal is isolation; slots share the resources of the web hosting plan.
Objective review
1. A website can have up to four deployment slots besides the main slot: not just two, not just three, and not more than four. The correct answer is the option stating a maximum of four deployment slots besides the main slot.
2. Correct answers: websites sharing a web hosting plan must share the same subscription, the same region, the same resource group, and the same pricing tier.
3. Web hosting plans cannot be created directly, so those options would not result in a new web hosting plan. A web hosting plan can only be created as a step in creating a new website or in migrating a website to a new web hosting plan.
Thought experiment: You should set up the custom domain name first because it is a prerequisite for requesting the SSL certificate. While testing, you can use SSL via the endpoint at https:
Objective review
1.
Because the certificate does not identify the subdomain, it becomes possible to lure users to a similarly named website pretending to be yours. Because the private key is used to decrypt all Azure traffic, its compromise would mean compromising your website security, which would not be possible if you had your own certificate.
Windows PowerShell is supported only on Windows. The cross-platform command-line interface (xplat-cli) would be useful here, and the management portal is accessible using a browser on a Mac, so options B and C are valid. This change will likely yield a new IP address for the website, so the A record needs to be updated. You should consider using monitoring through the portal and configuring alerts. You should enable and review the diagnostic logs.
You could also monitor streaming logs to see any traces in real time, though only after the issue has occurred. You might also try remote debugging within a test environment. Log streaming will not disturb visitors, since it displays logs that are collected in the background. Remote debugging, however, certainly will: when you are stopped at a breakpoint, the website stops responding to all requests.
This will not disturb visitors. Event logs are collected in the background without interfering with requests. Application logs are collected in the background without interfering with requests.
The sending of automated alert emails can be enabled along with endpoint monitoring. Auto-scale is not related to endpoint monitoring. Outside-in monitoring can be enabled along with endpoint monitoring. Monitoring from multiple geographic locations can be enabled along with endpoint monitoring. The file system is a storage location for diagnostic logs. Azure Storage is a storage location for diagnostic logs.
This is not a valid location for diagnostic logs out of the box. This is not a valid location for diagnostic logs.
You can trigger a WebJob with a blob, a queue message, or on demand using the portal. You can schedule a WebJob to run with a daily recurrence starting at a specific time in the evening. WebJobs can be triggered by both queue messages and blobs. WebJobs can be created as standalone executables or scripts.
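The queue-triggered case can be pictured as a polling loop. The sketch below is a conceptual illustration only, not the WebJobs SDK (which wires up queue and blob triggers for you); the queue contents and handler are made up for the demo.

```python
import queue
import time

def run_worker(q, handler, stop_when_empty=True):
    """Conceptual continuous-WebJob loop: poll a queue and pass each
    message to a handler. The WebJobs SDK performs this plumbing for you
    when you declare a queue trigger; this only illustrates the idea."""
    processed = 0
    while True:
        try:
            msg = q.get_nowait()
        except queue.Empty:
            if stop_when_empty:
                return processed
            time.sleep(1)  # back off before polling again
            continue
        handler(msg)
        processed += 1

# Demo: two pending messages, handled in arrival order.
inbox = queue.Queue()
inbox.put("resize image-001")
inbox.put("resize image-002")
handled = []
count = run_worker(inbox, handled.append)
```

A real continuous WebJob would set stop_when_empty=False and keep polling; the demo stops once the queue drains so the run terminates.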