EC2 & Getting Setup
14. Bash Scripting
Okay, so I'm actually in a text editor right now. I'm using TextWrangler, but you can use whatever text editor you want. And we're just going to create a really simple homepage. So we're going to type in HTML, then a body, then a header, and then we're just going to write "Hello, Cloud Gurus." Then we're going to close our header, close our body, and close our HTML. So that is a very, very simple webpage that just says, "Hello, Cloud Gurus." I want you to go ahead and save this, and make sure you save it as an HTML page. So I'm going to save it into my documents, call it index.html, and hit save. And now I'm going to go over to the AWS console.
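If you prefer the terminal to a text editor, the same file can be created with a heredoc — a minimal sketch, matching the page described above:

```shell
# Write the same minimal homepage from the shell
# (equivalent to saving it from a text editor).
cat > index.html <<'EOF'
<html>
<body>
<h1>Hello, Cloud Gurus</h1>
</body>
</html>
EOF

# Confirm the greeting is in the file.
grep -c "Hello, Cloud Gurus" index.html   # → 1
```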
Okay, so I've logged into the AWS console. The very first thing I want you to do is change your region over to US East (Northern Virginia), and I'll show you why a little bit later; it is critical that we're all in the same region for this to work. The next thing we want to do is go over to S3 and create a bucket where we're going to store that HTML file we just created. Go in and create your bucket, but this time use US Standard, which is Northern Virginia, and call it mywebsitebucket or something similar. A plain name like that won't be free, because bucket names are globally unique, so I'm going to append acloudguru or something like that. Okay, so there's my bucket; it's been created.
If I go into it, I'm just going to upload that HTML file we created to this bucket. So here it is, index.html; go ahead and hit "open" and start uploading. That has now been uploaded to my S3 bucket. Remember, all buckets are private by default, so if I click on this object and try to open it, I'm going to get an error message saying access is denied. Okay, so now that we have created our bucket and uploaded our object to it, let's go back to the main console and go into Identity and Access Management. The reason for this is that we deleted the role in our previous lecture, so we need to recreate it. Go into Roles and create a new role. We'll call it MyS3AdminAccess or something similar, use the EC2 role type, then type "S3" in the policy search, select AmazonS3FullAccess, apply that policy to the role, and create it. So there we go.
We've got our role created. Now we'll go into EC2 and create an EC2 instance with this role attached to it. But we're not going to give it a Bash script just yet, because I want to show you the logic of how we build Bash scripts. So go through and launch an instance: we're going to use an Amazon Linux AMI, we're going to use a t2.micro, and in here we apply our role, MyS3AdminAccess. Now, your Bash scripts are always passed in through these advanced details here, and you always start a Bash script with a shebang. A shebang is simply the hash symbol followed by an exclamation mark, followed by the path to our interpreter. The interpreter is basically just going to interpret our Bash commands and run them sequentially at root level when this VM first starts up. So I'll just do one command for now: a yum update with -y. Simply copy and paste this into Notepad. So I've pasted it into my notepad here; I'm going to go back over, hit next, and we're just going to use standard storage.
We're going to call this MyTestBashScriptServer, or something like that. Go ahead and add your security groups. You can select an existing security group, but because we are in a new region you might not have this particular one, so let's create our own for argument's sake and call it WebDMZ, our demilitarized zone. I'm going to allow SSH (port 22) and HTTP (port 80) to be open to the whole world. Go ahead and hit "review and launch" and then launch. And of course, if you haven't worked in Northern Virginia before, you might have to create a new key pair. I'm going to call it MyNVKeyPair and download it. And now I'm going to launch my instance. Okay, so I'm just going to pause the video and wait for this instance to be assigned a public IP address. Then what I'm going to do is SSH into the instance, and we'll go from there.
OK, so I've got my public IP address; I'm going to copy it to my clipboard, and now I'm going to go over to the terminal window. Navigate over to where you downloaded your key pair. For me, it was in downloads, and then I moved it over to a subdirectory called SSH that I have here, so you can see it: MyNVKeyPair. Change the permissions on the key so that you can actually use it: chmod 400 on MyNVKeyPair, and there we go. After that, all we have to do is type ssh ec2-user@ the IP address, then -i and the name of the key pair. Hit enter, type yes, and you should be able to SSH in. I'm going to elevate my privileges to root, so sudo su, and now I'm going to clear the screen. All we did in our very first Bash script was run a yum update, so if I try it again with yum update -y and press enter, there should be no updates, and you can see there are no packages marked for update.
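The key-permission step above can be sketched like this (the key name is a placeholder, and the `ssh` command itself only works against your real instance):

```shell
# Placeholder for the key pair you downloaded from the console.
touch MyNVKeyPair.pem

# SSH refuses private keys that are readable by anyone but the owner.
chmod 400 MyNVKeyPair.pem

# Show the resulting permission bits.
stat -c '%a' MyNVKeyPair.pem   # → 400

# Then connect (substitute your instance's real public IP):
#   ssh ec2-user@<public-ip> -i MyNVKeyPair.pem
```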
So it has already run that at root level. Now we'll create a small web server and copy our index.html file from our S3 bucket to our EC2 instance. And every time we enter a command in the shell, we're going to copy and paste that command over to Notepad; this way, we're going to build out our Bash script manually. The very first thing I want to do is install Apache, so I type yum install httpd -y, and there we have it, it's installed. Then either cut and paste that or type it back into your text file: yum install httpd -y. Then we go back to the terminal. So we've installed Apache. The next thing we're going to do is start the Apache service, so service httpd start, and then we're also going to make sure the service comes back on if our EC2 instance reboots. We do that by typing chkconfig httpd on, and there we go. Now we just want to copy service httpd start over to our notepad, and I'll write the next one in manually to save some time.
So that's chkconfig httpd on. Basically, what we've done so far is apply our updates, install the Apache service, start it, and make sure it starts automatically if our EC2 instance resets. The last thing we want to do is copy our file from S3 over to our /var/www/html directory. There are a couple of ways you can do this. First, let's change to that directory: type cd /var/www/html and hit enter, and I'll copy and paste that into my Notepad as well. Now we'd like to copy from our S3 bucket to /var/www/html. We'd type aws s3 cp, and then the source would be s3:// followed by the name of our bucket. Now, if you've forgotten the name of your bucket, you can just type aws s3 ls. This is also a good way to ensure the role has been assigned to the EC2 instance; if this doesn't work, double-check that the role was applied to your instance. So here is my bucket name: mywebsitebucket-acloudguru. I'm going to copy that to my clipboard. So now we type aws s3 cp, then s3:// and the name of the bucket, and we'll just do the file itself at this stage, so index.html, copied to /var/www/html. Go ahead and hit enter, and you can see it says "download." So it has downloaded this file from our S3 bucket to our EC2 instance. That has worked successfully.
So just copy and paste that command into your text file editor. Now, if you're getting some kind of versioning error, or it's saying the request is unsigned, make sure that you've actually done this in the Northern Virginia region. That command will not work as-is in some regions; you'd have to run it with the region flag, specifying the region in which the EC2 instance is actually located. That trips a lot of students up, and that's why I said at the start of this lecture, "Just make sure you're in Northern Virginia." But this is also why I'm showing you how to create Bash scripts manually: so you can troubleshoot, and when a command finally does work, you can copy and paste it into your text file knowing it will work the very next time. So we have copied it across; let's have a quick look and make sure it is actually on our EC2 instance. We just type ls.
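A hedged sketch of the region fix described above — the bucket name is a placeholder, and the command is only echoed here because it can only succeed against a real bucket:

```shell
BUCKET=mywebsitebucket-acloudguru   # placeholder bucket name
REGION=us-east-1                    # N. Virginia

# Outside us-east-1 the copy can fail unless the region is passed explicitly:
CMD="aws s3 cp s3://$BUCKET/index.html /var/www/html/ --region $REGION"
echo "$CMD"
```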
So there it is, index.html. We can go into it with nano index.html and make sure it's there: yep, it says "Hello, Cloud Gurus." We'll also just check that the httpd service is running. Yep, it's running. So that is literally it. If we go back to our text wrangler, what this Bash script will now do is update our EC2 instance, install Apache, start the Apache service, make sure Apache starts every time this EC2 instance reboots, and then change our directory. Actually, you don't need that cd line; we were only doing it so we could run the ls command afterwards, so we can take that out. And then we've got aws s3 cp, s3:// and the name of your website bucket and the file, copied to /var/www/html. So this is our Bash script. Now let's go ahead and test it. We'll go back over to our browser, and I'm going to terminate this instance. The instance state will change to terminated once we confirm that yes, we want to terminate our EC2 instance. Go ahead and hit launch.
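Putting the pieces from this lecture together, the finished user-data script looks roughly like this (written to a local file here so it can be inspected; the bucket name is a placeholder for whatever you called yours, and the `cd` line is left out, as discussed):

```shell
# Assemble the user-data Bash script built up during the lecture.
cat > userdata.sh <<'EOF'
#!/bin/bash
yum update -y
yum install httpd -y
service httpd start
chkconfig httpd on
aws s3 cp s3://mywebsitebucket-acloudguru/index.html /var/www/html
EOF

# The first line must be the shebang for EC2 to run it as a script.
head -1 userdata.sh   # → #!/bin/bash
```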
So we want our Amazon Linux AMI, and we want to make sure we've got a t2.micro. Now we want to set our role to MyS3AdminAccess. And here are our advanced details: we want to grab the Bash script that we've created, so copy and paste it in here. We've got our starting point, the shebang, with the path to our interpreter; we're running a yum update, so we're updating the OS; we're installing Apache; we're starting the Apache service; we're making sure Apache starts if this EC2 instance reboots; and then we're copying our website file from S3 over to our EC2 instance. So let's hit next for storage and leave everything at default. We'll call this MyWebsiteTest. Hit next for your security group and use the existing security group that you just created — mine is the one I created in Northern Virginia. Go ahead and hit launch, acknowledge the key pair, and launch the instance. So now, in theory, if we go back to view instances and wait for this to provision, we should just be able to enter the IP address into our browser followed by /index.html and see the website.
Hello, Cloud Gurus! And we've taken that website data from S3, where it was protected; you can't go into S3 and fetch that object directly, because we never made it public. So we're moving data from S3 into EC2 automatically using a Bash script. I'm just going to pause the video and wait for this to fire up. So it has fired up, but again, I'm going to wait a couple of minutes, because Bash scripts can take from 30 seconds up to one or two minutes to run, depending on performance. So do give it a couple more minutes before you type in the public IP address. Okay, so grab your public IP address, copy it to your clipboard, open up a new tab, and paste it in there. And there you go: you can see Hello, Cloud Gurus. If you're seeing the Apache landing page instead, just add /index.html; it's the same thing, but it is resolving. So we've automated the deployment of a web server, and we've pulled down the content we want on that web server from S3, just by using a Bash script. You can see how things start to become a lot easier.
We can start automating our web servers, and then this really starts to lead into autoscaling, which we're going to kick off in two labs from now. In the next lab, we're going to show you how to get some instance metadata. So if you've got the time, join me for the next lecture. Thank you.
15. Installing PHP & Composer
I'm going to go to that address myself, hit enter, and open up this bootstrap script to take a look at what it does. So here we've got our shebang and the path to our interpreter. This initiates our bootstrap script, and then everything else runs at root level when our EC2 instance is first provisioned. The first thing we're going to do is run yum update -y, which will essentially update the kernel and load all of our security patches onto the operating system. And we're then going to install Apache 2.4 and PHP 5.6.
Now, these are the most current versions as of recording this video. What you might want to do, just to check they're still current for you, is type in "Apache current version" or "PHP current version," click the first link in Google, and it will tell you whether it's still 2.4 or 5.6. There is PHP 7 out at the moment, but at the time of recording it does not work with the AWS SDK, so for this lab we'll be using version 5.6. Then, after PHP, Apache, and Git are installed, we're going to start the Apache service and make sure it stays on in case the server reboots. We're going to change our directory to /var/www/html, which is the default directory for Apache. And then what we're doing here is just an echo: we're outputting this line of code, a little PHP script that shows us the PHP information, and redirecting it to test.php. So it's going to create a little file called test.php.
And then once that's done, we're going to use Git to clone a repository that I have on GitHub: the acloudguru s3 repository. So this will all run on our EC2 instance as it boots up. Go ahead and copy this bootstrap script into your clipboard, then log into the AWS console and go over to EC2. We're going to provision a new instance: launch the instance, use the Amazon Linux AMI, and use a t2.micro. Go ahead and hit "next." Now, this is quite important: make sure you select the S3 admin access role that we created in the IAM lab. We need admin access to S3, and we need to assign it to this EC2 instance. Then down here we go to advanced details, and this is where we paste our bootstrap script; we've already run through what each line of code does. Go ahead and hit next to add our storage, leave the storage as default, and I'll name this instance PHP SDK EC2.
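Reconstructed from the description above, the bootstrap script looks roughly like this — a sketch, written to a local file so it can be inspected. The `httpd24`/`php56` package names and the repository URL are assumptions based on the lecture's wording:

```shell
# Assemble the lecture's bootstrap (user-data) script.
cat > bootstrap.sh <<'EOF'
#!/bin/bash
yum update -y
yum install httpd24 php56 git -y
service httpd start
chkconfig httpd on
cd /var/www/html
echo "<?php phpinfo();?>" > test.php
# Repository URL assumed from the lecture's description:
git clone https://github.com/acloud-guru/s3
EOF

# Two yum steps: the update and the package install.
grep -c '^yum' bootstrap.sh   # → 2
```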
Just to make it nice and simple for me to remember. I'm going to hit next and use an existing security group if you've got one already set up. So, in the WebDMZ group, ensure that ports 80 and 22 are open. You don't necessarily need to have them open to the whole world; that's entirely up to you. I'm going to leave it open because this little EC2 instance is not going to live very long. So I'll hit review and launch, then launch, and now you just choose your existing key pair. I'm going to choose my EC2 key and hit launch. Okay? So that's going to launch our EC2 instance, and hopefully we'll be able to tell whether our bootstrap script worked simply by visiting the IP address followed by /test.php. So what I'm going to do now is pause the video while this EC2 instance initializes. Okay, so my PHP SDK EC2 instance has been provisioned. If I click on it, I can see the public IP address down here, so I'm just going to cut and paste it.
I'm going to go up to my browser, paste this in, then add /test.php. And if everything in my bootstrap script has worked, we should see the PHP information page. And there we go. You can see that PHP has been installed, it's version 5.6, and we're almost ready to go. What we need to do now is go into this EC2 instance and install the PHP SDK using Composer. So change over to your terminal window and connect to your EC2 instance. Okay, I'm in my terminal window, and I'm just going to SSH in: I type ssh ec2-user@ and the IP address, then -i and the name of my key, which is my EC2 key PEM, and hit enter. I type yes, and now I'm connected to my Amazon Linux AMI. I'll elevate my privileges to root with sudo su, go over to my /var/www/html directory, and clear the screen. If I type ls, we can see that Git has already cloned my s3 repository, so we've got s3 in our directory, and we've also got our test.php page.
Now all you need to do is switch over to a browser. Okay, so now that I'm in my browser, I'm going to search for AWS PHP SDK and hit enter, and you can see the SDK landing page here. We're going to click on it, and it gives you a little overview of the PHP SDK. We go across here and select "Install AWS SDK for PHP." That takes us to the installation page, which gives us some instructions, and all you need to do is copy these two lines of code into your terminal. You need to make sure you're in the /var/www/html directory. So I'm going to copy the first line to my clipboard, go across to my terminal, and install Composer; it's done very quickly. Then I'll alt-tab back over and grab the second line of code, which runs the composer command to install the latest stable version of the SDK. I cut and paste that in and hit enter. It gives us a little warning message about running it as root, but that's okay; this is just a demo. It's going ahead now, running Composer and installing the SDK. And then that's it; the installation is done. If we type ls, you'll see that there are some new files and directories down the bottom here.
In fact, I'll just clear the screen and type ls again to make things clearer: you can see that composer.json and composer.lock have been created, along with a vendor directory. If you go into vendor and type ls, you can see there's a file called autoload.php. Now, you need this file to load every time you're going to use the SDK. If we take a look at it, autoload.php basically says require_once the Composer autoload_real.php file. So, every time you use a PHP script to access the SDK, you must first require the autoload.php script. That's just something to keep in mind if you are a PHP developer; you do that using a require line, which I'll show you in the next lecture. So we have now installed PHP, the PHP SDK, and Apache, and we're all set up for the next lecture. If you've got the time, join me there, and if you have any questions, please let me know. Thank you.
16. Using the PHP SDK to access S3
Okay, so I've logged into my EC2 instance, and I'm currently in the root directory. I'm going to go over to my /var/www/html/s3 directory, and in here we're going to have a look at some of the files that we downloaded from GitHub. The very first file we're going to use is createbucket.php, so I'm going to type nano createbucket.php. Now, we are going to go through the code and see what it does, but remember, this isn't a coding course.
This course isn't designed to teach you PHP. It's really just giving you a high-level overview of how you can use the PHP SDK to interact with S3. If you want to learn PHP, there are plenty of other courses on the subject; in this one I'm just going to show you how we can use the PHP SDK to talk to S3. The whole purpose of this course is to help you pass the developer exam, so we always stay very on-topic with the exam content. So up here, we've just got some comment text that's not going to be executed — it just says copyright A Cloud Guru — and then we've got our connection string. Here it just says include the connect-to-aws.php file, which is in the same directory as createbucket.php, and we'll see what's inside that connection string in a second. And what we're going to do here is create a unique bucket name.
So it's going to be called acloudguru, but with a unique ID appended. Then essentially we're going to create a bucket using that bucket name, which is what this function here does. Then we've got some HTML that just says "Hello, Cloud Gurus," links to a little logo I've got, and says you've successfully created a bucket called whatever the bucket name is. Then it asks you to go on to createfile.php, passing the bucket name in that link. So all you have to do is click "Click here to continue," and it passes the bucket name to the next page. Now, before we do that, let's look at our connection string: nano connect-to-aws.php. If you remember from the last lecture, I said that you will always need autoload.php in order to use the AWS SDK.
So I've actually put this into our connection string. We're just saying require /var/www/html/vendor/autoload.php, so that loads up that PHP file, and then we're creating a new client connection. This client connection is just where we state what version we are using, which is always the latest, and what region we're going to be using; in this case we're creating the bucket in us-east-1, and you can change that to suit whatever region you want. So I'm going to exit here and review createfile.php: nano createfile.php. In here again, you can see that the very first thing we do is include connect-to-aws.php, because we're going to use the AWS SDK, and then we're going to create a file. We get the bucket name first — this is the unique name we generated in createbucket.php — and then we go ahead and create a file name.
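A sketch of what that connection file contains, reconstructed from the description above — the file name and variable name are assumptions, though `S3Client` taking a `version`/`region` array is the SDK v3 constructor. It's written via a heredoc here so the shape is easy to see:

```shell
# Reconstruct the connection file described in the lecture (a sketch).
cat > connect-to-aws.php <<'EOF'
<?php
// Load Composer's autoloader -- required before any SDK class is used.
require '/var/www/html/vendor/autoload.php';

use Aws\S3\S3Client;

// Create the client; the region should match where the bucket will live.
$client = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1',
]);
EOF

grep -c 'autoload.php' connect-to-aws.php   # → 1
```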
So it's going to be called cloudguru.txt, and the body of the file is going to be "Hello Cloud Gurus." Then it creates another little HTML page that says the file, with whatever the file name is — the file name here is the key — has been successfully uploaded to the S3 bucket. After we've created the file, we use this function to actually put the object into our S3 bucket, and when that has succeeded, it says the file has been successfully uploaded. What we're then going to do in the next script is read the contents of the file: we're going to go to readfile.php, passing it the bucket and the key. So exit this one and go nano readfile.php. And you can see here again, the first thing we do is include our connection string, and then we have some code to get the variables from our last web page.
So we're getting the bucket name as well as the key name, and then here's the code that reads the file from S3. And here's the HTML that outputs all our variables: we say what the bucket is, what the object's name is, and what the data inside the object is, and then we display our little logo. Then we need to go in and clean up: we're going to delete our bucket and delete all the objects within it. That's the very last script, cleanup.php, so go nano cleanup.php. In here again, you can see we've got our connection string, and we get the bucket name and then the file name. Then, because buckets cannot be deleted unless they are empty, there's some code to first delete the objects in our bucket before we delete the bucket itself.
So we're going through and deleting all the objects within that bucket, and then this is the code that tells the user the file has been deleted — basically just a little webpage saying an object has been successfully deleted. Then this is the code to delete the bucket itself, followed by another little webpage that says the bucket has been successfully deleted, with the bucket name, and then "goodbye, cloud gurus." So that's quite a bit of code to go through, but I'm just trying to explain what it does. Again, you don't need to know how to code in PHP to pass this exam; you don't even need to be a developer, and we supply all the code for you. All I'm doing is explaining what it is and what it does so you can see it in action. So let's take a look at how this all works. Open your browser, go to EC2, and get the IP address of your instance — it was this one here. Then I'm going to open up a new tab, paste the address in, and add /s3/createbucket.php. And if it's all working, it says you have successfully created a bucket called acloudguru, with some random numbers and letters appended to make it a unique bucket name.
Now if we go over to S3, we'll be able to see this bucket. Okay, so here's my bucket: acloudguru and then some random numbers and letters. I'm going to go back over to my webpage; it's saying you've successfully created this bucket, click here to continue. Now, if you hover your mouse over the link and look down at the bottom left-hand corner of the screen, you can see it's passing some variables — it's passing the bucket name, acloudguru followed by that random string. So I'm going to click this link, and on the next page we create a file called cloudguru.txt, which has been successfully uploaded to our bucket. Before we click through and read the file, let's go back to the S3 management console and into our bucket. And we can see here — here is our file. It's 18 bytes and was last modified only a few seconds ago. And if I go in and download this file, we can take a look.
And there you have it, it's downloaded. I open it up, and you can see it says "Hello, Cloud Gurus." So let's go back over to our webpage, where it says "click here to read your file," and read the contents of the file. Here again we've got the bucket name, the object's name is cloudguru.txt, and the data in the object is "Hello Cloud Gurus." So again, this is the SDK interacting with S3: it's pulling down the object and actually reading the data within it. And then, if we want to clean up everything that we've created, we just click "click here" to remove your files and your bucket. It now says the object cloudguru.txt has been successfully deleted, and the acloudguru bucket has also been successfully deleted. Let's verify that: back in the S3 console, if I hit refresh, I should get an error, because I'm viewing a bucket that no longer exists. And yep, if I hit refresh, there is no longer a bucket there. So that's it, guys.
I mean, it's a really quick overview, but essentially what we've shown you is how to install the PHP SDK on an EC2 instance and then how to create some basic programmes that interact with S3 using the PHP SDK. So this programme essentially created a new bucket, created a new object and put it into the bucket, then went in, downloaded that object from the bucket, and read the data contained within it. And then the programme deleted the object, the bucket, and all the data within the bucket. So it's a really quick overview. Again, you're not going to need any coding knowledge for the exam; you're not going to be asked to write lines of code or anything like that. But I just wanted to give you some hands-on, practical examples of how this all works. So that's it for this lecture, guys. If you have any questions, please let me know. If not, feel free to move on to the next lecture. Thank you.
17. EC2 Instance Meta-data
Hello, cloud gurus, and welcome to this lecture on instance metadata. In this lecture, we're going to discuss how you can get things like the public IP address of your EC2 instance while you're logged in to EC2 itself. So you can go ahead and read this article, which I'll post in the resources section. However, you will need to know this for the exam, and more specifically, the URL to access your instance metadata. So over here, I've just spun up a new EC2 instance. It's completely spun up from scratch. It's using an Amazon AMI, and I've already signed in. Okay, so we're logged into this EC2 instance. I'm just going to elevate my privileges to root, and I'm going to clear the screen. And now what I want to do is type in this command, which is curl, and then this IP address.
And we need to remember this IP address for the exam. It's pretty simple: 169.254.169.254. If you're not a numbers person, just remember 169.254 repeated twice. Then forward slash, latest, forward slash, meta-data, forward slash, and then you're going to get a whole bunch of options. You can see we've got the ami-id, the hostname, local-ipv4, and public-ipv4. So why don't we have a look at how we can get the public IP address? I'm just going to hit the up arrow and add public-ipv4 on the end, and you can see it's given me my public IP address. Now, that's pretty much all you need to remember for the exam: how to go about getting that public IP address.
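The metadata calls above only answer from inside an EC2 instance, so this sketch just builds the URLs; the address itself is the fixed link-local metadata endpoint:

```shell
# Base URL of the EC2 instance metadata service (fixed link-local address).
META=http://169.254.169.254/latest/meta-data

# From an EC2 instance you would run:
#   curl -s "$META/"             # lists available keys (ami-id, hostname, ...)
#   curl -s "$META/public-ipv4"  # prints the instance's public IP address

echo "$META/public-ipv4"   # → http://169.254.169.254/latest/meta-data/public-ipv4
```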
The thing to remember is that you're looking for meta-data, not user data, and you want to use the IP address 169.254.169.254. Okay, I'm just going to show you how to use this from a developer's perspective. So if we clear the screen, we're going to need to do some installation. We're going to run yum install httpd php php-mysql and just hit yes. Okay, so that's installing; then we start the service with service httpd start, and the last thing we'll need is git, so yum install git. And again, I'm going to clear the screen. I'm going to go to my /var/www/html directory, and in here I'm going to type in the command git clone followed by the repository URL, which is on GitHub under acloudguru, and the repository is called metadata. And if we just ls, you'll see the folder there, so we can cd into metadata. And then inside here, you'll see a curl example file, so we'll just take a quick look at the code. Like I said in my introduction, this course is not designed to teach you how to code; it's just designed to help you pass the developer exam.
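Those setup steps, collected in one place as a sketch: this assumes Amazon Linux's yum package names as spoken in the lecture, and the GitHub repository URL is reconstructed from the spoken description, so treat both as assumptions. It's wrapped in a function so nothing installs until you call it on the instance.

```shell
# Hypothetical setup script following the lecture's steps on Amazon Linux.
# Package names and the repo URL are assumptions based on the transcript.
setup_metadata_demo() {
  sudo yum install -y httpd php php-mysql          # Apache, PHP, MySQL driver
  sudo service httpd start                         # start the web server
  sudo yum install -y git
  cd /var/www/html || return 1
  sudo git clone https://github.com/acloudguru/metadata  # assumed repo URL
}
# On the EC2 instance, run: setup_metadata_demo
```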
But I'll quickly go through what a few lines do. So here we've got our public IP variable, and you can see that we're pulling the URL http://169.254.169.254/latest/meta-data/public-ipv4. And then we're doing a whole bunch of commands, basically using curl and then converting the response to a string. But if we go all the way down to the bottom, you can see the output variable, which contains the output string, and then we print it right here: we're echoing "The public IP address for your EC2 instance is" followed by the output variable. So what I'm going to do now is simply open a browser. And you can see here that I've gone to that instance's address and opened the curl example page inside the metadata folder, and I'll just refresh it so you guys can see. If you are trying to do this and it's not working, just remember to make sure that you have allowed HTTP through the firewall and that the httpd service is running, but you should have no problems. And it will say the public IP address for your instance is this. So there you go. I hope that makes sense, guys. If you have any questions, please let me know. If not, feel free to move on to the next lecture. Thank you.
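The PHP page's logic, translated to a shell sketch under the same assumptions: fetch public-ipv4 from the metadata service and print a message around it. The fetch only works on EC2, so the message-formatting step is split out into its own function that can be verified anywhere.

```shell
# Shell sketch of what the lecture's PHP page does (not the actual code):
# grab the public IPv4 from the metadata service, then print a message.
get_public_ip() {
  curl -s http://169.254.169.254/latest/meta-data/public-ipv4  # EC2-only
}
announce_public_ip() {
  # Formatting separated out so it can be tested without an instance.
  printf 'The public IP address for your EC2 instance is %s\n' "$1"
}
# On EC2: announce_public_ip "$(get_public_ip)"
```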
Amazon AWS Certified Developer Associate practice test questions and answers, training courses, and study guides are uploaded in ETE file format by real users. These AWS Certified Developer Associate (DVA-C01) certification exam dumps and practice test questions are designed to help students study and pass.
Why customers love us?
What do our customers say?
The resources provided for the Amazon certification exam were exceptional. The exam dumps and video courses offered clear and concise explanations of each topic. I felt thoroughly prepared for the AWS Certified Developer Associate test and passed with ease.
Studying for the Amazon certification exam was a breeze with the comprehensive materials from this site. The detailed study guides and accurate exam dumps helped me understand every concept. I aced the AWS Certified Developer Associate exam on my first try!
I was impressed with the quality of the AWS Certified Developer Associate preparation materials for the Amazon certification exam. The video courses were engaging, and the study guides covered all the essential topics. These resources made a significant difference in my study routine and overall performance. I went into the exam feeling confident and well-prepared.
The AWS Certified Developer Associate materials for the Amazon certification exam were invaluable. They provided detailed, concise explanations for each topic, helping me grasp the entire syllabus. After studying with these resources, I was able to tackle the final test questions confidently and successfully.
Thanks to the comprehensive study guides and video courses, I aced the AWS Certified Developer Associate exam. The exam dumps were spot on and helped me understand the types of questions to expect. The certification exam was much less intimidating thanks to their excellent prep materials. So, I highly recommend their services for anyone preparing for this certification exam.
Achieving my Amazon certification was a seamless experience. The detailed study guide and practice questions ensured I was fully prepared for AWS Certified Developer Associate. The customer support was responsive and helpful throughout my journey. Highly recommend their services for anyone preparing for their certification test.
I couldn't be happier with my certification results! The study materials were comprehensive and easy to understand, making my preparation for the AWS Certified Developer Associate stress-free. Using these resources, I was able to pass my exam on the first attempt. They are a must-have for anyone serious about advancing their career.
The practice exams were incredibly helpful in familiarizing me with the actual test format. I felt confident and well-prepared going into my AWS Certified Developer Associate certification exam. The support and guidance provided were top-notch. I couldn't have obtained my Amazon certification without these amazing tools!
The materials provided for the AWS Certified Developer Associate were comprehensive and very well-structured. The practice tests were particularly useful in building my confidence and understanding the exam format. After using these materials, I felt well-prepared and was able to solve all the questions on the final test with ease. Passing the certification exam was a huge relief! I feel much more competent in my role. Thank you!
The certification prep was excellent. The content was up-to-date and aligned perfectly with the exam requirements. I appreciated the clear explanations and real-world examples that made complex topics easier to grasp. I passed AWS Certified Developer Associate successfully. It was a game-changer for my career in IT!