Designing Splunk Architecture
10. Understanding Clustering and High Availability in Splunk
This architecture can be considered a scaled-up version of the large deployment that we saw in the previous tutorial, this time bringing high availability and clustering into the design. Since we have already gone through these scenarios, by now you should be aware of the benefits of having high availability and clustering options in your organization. Let's see the architecture now. At first glance, it looks like total chaos with a lot of components, but as a Splunk architect, you'll be able to see the beauty of Splunk's flexibility and scalability in its design.
If you look carefully, there are two sites: Site One and Site Two. In real-life scenarios, Site One would be your main data center, and Site Two could be your DR, or disaster recovery, center. For our understanding, let us call them Site One and Site Two. The Site One components resemble the large enterprise architecture we went through in our previous discussion; it is essentially identical. Keep in mind that for high availability and clustering we are considering only large-scale enterprise deployments.
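To make the two sites concrete, here is a minimal sketch of the multisite settings in server.conf on the cluster master, assuming the pre-9.0 setting names (mode = master) used in the SPLK-1003 material; the site names and security key are placeholders:

    # server.conf on the cluster master
    [general]
    site = site1

    [clustering]
    mode = master
    multisite = true
    available_sites = site1,site2
    site_replication_factor = origin:1,total:2
    site_search_factor = origin:1,total:2
    pass4SymmKey = placeholder_key

With total:2, every bucket has a copy on each site, which is what makes the failure scenarios later in this section work. Each indexer declares its own site in its [general] stanza and points at this master; that peer-side configuration is sketched a little further below.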
So Site One is our main data center, where all the logs are collected using universal forwarders and syslog, then parsed by our heavy forwarders and pushed to the indexers for storage and retrieval, while the search heads do their fancy work of fetching the data from the indexers and visualising, reporting, or alerting. The same applies to DR, our Site Two, which is identical to the main site. From this diagram, we can also see that some components, like the deployment server and the license manager, communicate with both sites. Having a deployment server that talks to all of the components, such as the search heads, indexers, heavy forwarders, and data sources, has the huge benefit of letting you manage the configuration in one place.
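For reference, the forwarder side of this data flow is just an outputs.conf stanza; a minimal sketch with placeholder hostnames (in this architecture the universal forwarders would list the heavy forwarders as their targets, and the heavy forwarders would list the indexers):

    # outputs.conf on a forwarder
    [tcpout]
    defaultGroup = site1_indexers

    [tcpout:site1_indexers]
    server = idx1.site1.example.com:9997, idx2.site1.example.com:9997, idx3.site1.example.com:9997

The forwarder automatically load-balances across the listed targets, so data collection continues even if one of them is unavailable.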
Similarly, we know from previous modules that the license manager communicates with all indexers in Site One, Site Two, and any other sites in your architecture to keep track of license utilization. Since it has very limited functionality, the same instance can also serve as the cluster master, which takes care of making sure the data has been copied, or replicated, to the other site and vice versa. The cluster master's function can be combined with that of a deployment server or a license manager; although Splunk does not recommend this, it doesn't have much of an impact on performance.
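On each indexer, pointing at a combined license manager and cluster master is just a few stanzas in server.conf; a sketch with a placeholder hostname (note that [license] master_uri and mode = slave are the pre-9.0 setting names):

    # server.conf on each cluster peer (indexer) in Site Two
    [general]
    site = site2

    [license]
    master_uri = https://cm.example.com:8089

    [clustering]
    mode = slave
    master_uri = https://cm.example.com:8089
    pass4SymmKey = placeholder_key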
Since the license manager has very limited functionality, it can be made a cluster master too. It is the duty of the cluster master to make sure the replication and search factors are met among the cluster members and that the cluster is stable; the health of the cluster can also be monitored from the cluster master. Finally, consider some scenarios in which multisite clustering will be beneficial. Assume one of the indexers in my main site fails. What happens? If you have configured a replication factor of two, there are still two copies of the data spread across the remaining indexers, which should be more than enough. We will discuss the replication factor and the other factors, and how they influence the cluster, the storage, and the high availability part, separately. For now, let's say we have two copies of this data here.
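To verify that the replication and search factors are met, you can run the standard cluster status command on the cluster master:

    # run on the cluster master
    splunk show cluster-status --verbose

The output states whether the replication factor and search factor are met and shows the status of each peer, which is a quick way to confirm the cluster is stable before and after the failure scenarios below.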
So if one indexer goes down, there is a very good chance that the remaining indexers can still give you the results without any impact. As a second scenario, suppose one of the search heads goes down. If it is a highly critical search head that has been clustered into our DR, we should be able to switch to the DR search heads and continue with our dashboards, reports, or alerting without any issues. Similarly, if it is a dedicated search head, such as one running a premium app configured on only that search head and not clustered, the alerts or scheduled searches configured on it will no longer run. If it has been clustered into the DR site, those scheduled searches and alerts will be run by our search head at Site Two.
In the third scenario, let us consider that two indexers go down. In that case, our searches will be impacted: we will no longer get 100% of the results from the main site's indexers. But if we point the same searches at the DR indexers, we can still retrieve 100% of the data even though those two indexers are down. So at any given point in time, either the three indexers on one site or the three on the other should be able to serve 100% of the results. In the fourth scenario, the deployment server goes down. In this architecture the deployment server has no standby, and thus no failover. However, the deployment server differs from the other components for a reason: if you look at it sitting in the middle of the diagram, it is simply communicating with all the servers.
Even if the deployment server fails, our Splunk data flow is unaffected, because its only job is to let you manage configurations centrally: to modify a configuration, deploy it to the instances, and restart them. Even while it is down, the search heads, indexers, and heavy forwarders each keep a local copy of their configuration and continue to operate without any issues. Let's say the deployment server goes down and you are unable to restart it: restore a backup onto a new VM and assign it the same IP, and you should have the deployment server up and running in no time. By understanding all of this architecture and its benefits, you should be able to design the best-fitting architecture for your organization.
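The reason the same-IP restore works is that every instance points at the deployment server by address in deploymentclient.conf; a minimal sketch with a placeholder IP:

    # deploymentclient.conf on each forwarder, indexer, or search head
    [deployment-client]

    [target-broker:deploymentServer]
    targetUri = 10.0.0.50:8089

As long as the rebuilt VM answers on the same targetUri, no client-side changes are needed.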
11. Hardware Requirements for Splunk Architecture
As part of our journey to design the best architecture for our organization, the next step is understanding the hardware specifications required for our Splunk components. The link specified in the document should take you directly to the requirements page, which shows the hardware specifications recommended by Splunk. Let me show you its contents so that you have a better understanding.
These specifications are for Unix operating systems. Now let us go through them one by one, beginning from the search head standpoint. Let's say you have a small, medium, or large deployment.
Depending on the size of the architecture, the number of cores ranges from roughly twelve for a small enterprise to 64 for a large one. Search is core-intensive, so the more cores, the better for search: how quickly results are returned whenever you run a search depends mainly on the cores available on that search head.
It's better to have a higher number of cores for our search head. Looking at indexer hardware, it is critical to meet or exceed the recommended IOPS figure; Splunk's reference indexer hardware specifies 800 average IOPS, and the more IOPS, the better the performance of your indexer. Always remember never to compromise on IOPS, that is, your input/output operations per second, since it is one of the critical values for the performance of your entire Splunk environment.
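One way to verify the IOPS figure before go-live is a quick random read/write benchmark on the indexer's data volume; a sketch using the common fio tool, with a hypothetical Splunk data path:

    # 60-second random read/write test on the hot/warm volume (requires fio)
    fio --name=splunk-iops-test --directory=/opt/splunk/var/lib/splunk \
        --rw=randrw --bs=4k --size=1g --runtime=60 --time_based --group_reporting

Compare the read and write IOPS in the output against the recommended figure before putting the indexer into service.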
The next value to consider is storage. From our previous discussions, we know how to estimate the storage for our indexers. Now we need to understand what RAID level is required, or recommended, by Splunk to run at optimum performance. RAID 1+0 is highly recommended for better performance, but if you are able to meet the IOPS requirement, RAID 5 or 6 should be fine. The next item is the RAM specification, which again depends on the size of the deployment.
For a small, medium, or large deployment, the RAM can vary from twelve to 64 GB, similar to the core counts we considered earlier. Whatever the scale of the deployment, it's always better to go for the maximum available RAM. As you will notice, Splunk acts like a monster: it will eat up all the resources it can get its hands on, and it can be tuned to run at optimum performance by a Splunk administrator or architect.
There are also a couple of prerequisites for Splunk that should be taken care of as part of infrastructure provisioning, before installation. The first is ulimits: per Splunk's recommendations, a few limits need to be raised at the OS level so that Splunk operates at optimum performance. The second is SELinux (Security-Enhanced Linux), which should be disabled or set to permissive mode to allow Splunk to run. The third is THP, which stands for Transparent Huge Pages and is known to cause issues while running Splunk, so Splunk recommends disabling it before installation. A sketch of all three follows.
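Assuming Splunk runs as a user named splunk, the three prerequisites can be handled like this (the ulimit values follow commonly cited Splunk guidance):

    # /etc/security/limits.conf -- raise limits for the splunk user
    splunk  soft  nofile  64000
    splunk  hard  nofile  64000
    splunk  soft  nproc   16000
    splunk  hard  nproc   16000

    # put SELinux into permissive mode for the current boot
    # (set SELINUX=permissive in /etc/selinux/config to make it persistent)
    setenforce 0

    # disable Transparent Huge Pages until the next reboot
    echo never > /sys/kernel/mm/transparent_hugepage/enabled
    echo never > /sys/kernel/mm/transparent_hugepage/defrag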
12. Capacity Planning for your Architecture
The final step in concluding the design is capacity planning. The link specified in the document should take you to the official documentation, where you can download the Capacity Planning Manual; it will be handy while finalising the architecture. Let's go through this link. This manual is one of the most useful references when you are at the final stage of your Splunk architecture, and you can download it as a PDF.
Make sure you click "Download manual as PDF" at the top of the page, because if you simply save the page you will probably end up with just the first topic of the documentation. Clicking the download link gets you the complete manual.
So this is our Capacity Planning Manual, which will be very handy while finalising our Splunk architecture. We have already discussed license sizing, the number of indexers required, the number of search heads, the number of heavy forwarders, whether to have a deployment server, whether to have a license manager, and the hardware requirements for each Splunk component, such as RAM, CPU, and I/O, as well as the storage and IOPS requirements for the indexers.
We will summarise everything and decide on the best architecture for the organization. Always remember that IOPS should meet the recommended figure (800 average IOPS for Splunk's reference indexer), the RAM can vary from 12 to 64 GB based on the size of the architecture, and, of course, the more cores, the better for the search heads.