Switched LANs And Network Design Engineering Essay

Paper Type: Free Essay | Subject: Engineering | Wordcount: 2391 words | Published: 1st Jan 2015

A Local Area Network (LAN) serves only a small geographic area, and there is a limit on the number of hosts that can be attached to a single network. Network devices such as hubs and switches are used to interconnect the hosts, and a set of LANs interconnected by switches forms a switched LAN. This lab evaluates the performance of switched LANs.

OBJECTIVE:

The main goal of this lab assignment is to compare the performance of switched local area networks (LANs) built with hubs and switches. Parameters such as throughput, collision count, and network delay are studied through the project simulations, and the assignment questions are answered from the simulation results.

IMPLEMENTATION PROCEDURE:

The switched LANs are implemented in the OPNET IT Guru software, which provides step-by-step procedures. In the first configuration, the network is created with a single hub, named HUB_1, connected to nodes node_0 through node_15. Ethernet 10BaseT links, operating at 10 Mbps, connect the 16 hosts in the network. Each node is then individually configured with its traffic generation and packet generation parameter attributes. Because a hub repeats packets received on its input to all output lines irrespective of the destination, the hub-only network design is as shown below.
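
To make the flooding behaviour concrete, the short Python sketch below (our own illustration, not OPNET code; class and variable names are hypothetical) models a 16-port hub that repeats every incoming frame to all other ports:

class Hub:
    """Simplified model of a repeater hub: a frame received on one port is
    repeated out of every other port, regardless of its destination."""

    def __init__(self, num_ports):
        self.ports = list(range(num_ports))

    def forward(self, in_port, frame):
        # A hub keeps no address table, so it floods the frame to all
        # ports except the one it arrived on.
        return [(port, frame) for port in self.ports if port != in_port]


hub_1 = Hub(num_ports=16)                     # HUB_1 with node_0 .. node_15
copies = hub_1.forward(in_port=0, frame="frame from node_0")
print(len(copies))                            # 15: every other node receives a copy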

The second configuration uses both hubs and a switch: a switch is placed between two hubs, named HUB_1 and HUB_2, and each hub is connected to 8 nodes through Ethernet 10BaseT links. The main difference between a hub and a switch is that the switch uses a store-and-forward mechanism: it forwards packets received on an input port only to the required destination port, buffering them when necessary under heavy traffic. The network configuration for this hub-and-switch combination is shown in the simulation design below.
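
The store-and-forward idea can be sketched as follows (an illustrative Python model of a learning switch, not the OPNET switch implementation; names are ours):

class LearningSwitch:
    """Simplified store-and-forward switch: it learns which port each source
    address was seen on, forwards known destinations out of a single port,
    and floods only those destinations it has not yet learned."""

    def __init__(self):
        self.mac_table = {}          # source address -> port it was seen on

    def receive(self, in_port, src, dst, frame):
        self.mac_table[src] = in_port             # learn the sender's port
        out_port = self.mac_table.get(dst)
        if out_port is None:
            return ("flood", frame)               # unknown destination
        return (out_port, frame)                  # forward to one port only


switch = LearningSwitch()
print(switch.receive(in_port=1, src="node_0", dst="node_9", frame="hello"))
# -> ('flood', 'hello'): node_9 has not been learned yet
print(switch.receive(in_port=2, src="node_9", dst="node_0", frame="reply"))
# -> (1, 'reply'): the reply goes only to node_0's port

A real switch also queues each frame until the output port is free; that buffering step is omitted from this sketch.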

Both network configurations use the same packet generation and traffic generation attributes, with packets generated every 100 seconds. Each configured network is simulated for 2 minutes, and the results are captured for further evaluation.

OBSERVATIONS AND RESULTS ANALYSIS:

The simulation results for the two configurations are depicted in the graphs. Fig 3 shows the traffic sent over the hub-only and the hub-and-switch configurations; the graph indicates that the amount of traffic sent is the same for both. Fig 4 shows the packets received at the destinations for both configurations, and from it we can state that the hub-and-switch configuration is more efficient than the hub-only configuration.

The time-delay analysis in Fig 5 gives a clear picture of the efficiency of the hub-and-switch configuration: it maintains a constant delay of about 0.020 seconds for a given load, whereas the hub-only configuration shows no such stable behavior.

The collision counts in Fig 6 show that the hub-only configuration reaches nearly 2300 collisions over the same period, while the hub-and-switch combination records far fewer, at nearly 900. The graphs therefore confirm that the hub-and-switch configuration is more efficient than the hub-only configuration.
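
One way to see why splitting the 16 hosts into two 8-host segments cuts the collision count is a rough slotted-transmission model (our own simplification, not the model OPNET uses): if each station transmits in a slot with some probability p, a collision occurs whenever two or more stations in the same collision domain transmit together.

def collision_probability(n_stations, p):
    """P(two or more of n_stations transmit in the same slot)."""
    p_none = (1 - p) ** n_stations
    p_one = n_stations * p * (1 - p) ** (n_stations - 1)
    return 1 - p_none - p_one


p = 0.05                                     # hypothetical per-slot transmit probability
print(collision_probability(16, p))          # hub only: one domain of 16 stations
print(collision_probability(8, p))           # hub + switch: each segment has 8 stations
# The single 16-station domain collides noticeably more often per slot than
# either 8-station segment, which matches the lower count seen in Fig 6.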

QUESTIONS AND ANSWERS:

Question 1: Explain why adding a switch makes the network perform better in terms of throughput and delay.

Answer: From the simulation results it is evident that the switch performs better because it divides the network into smaller collision domains. Throughput increases as a result, and because the switch provides the full 10 Mbps of bandwidth on each of its ports rather than sharing it among all nodes, the network delay is also reduced.
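
The bandwidth argument can be illustrated with simple arithmetic (an idealized view under our own assumption that all nodes transmit at once, not a simulation result):

LINK_RATE_MBPS = 10      # 10BaseT
NUM_NODES = 16

# With a hub, all nodes contend for the same 10 Mbps segment; with a switch,
# each port is its own collision domain with the full link rate available.
shared_per_node = LINK_RATE_MBPS / NUM_NODES
switched_per_node = LINK_RATE_MBPS

print(f"hub:    about {shared_per_node:.3f} Mbps per node when all 16 transmit")
print(f"switch: {switched_per_node} Mbps available on every port")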

Question 2: We analyzed the collision counts of the hubs. Can you analyze the collision count of the “Switch”? Explain your answer.

Answer: Yes, the collision count of a switch can be explained from the switch's behavior. A switch uses a store-and-forward mechanism and can buffer packets during periods of heavy traffic, so frames never contend for the same segment inside the switch. These features mean the switch records no collisions at all, which is why switches are preferred.

Question 3: Create two new scenarios. The first one is the same as the OnlyHub scenario but replace the hub with a switch. The second new scenario is the same as the HubAndSwitch scenario but replace both hubs with two switches, remove the old switch, and connect the two switches you just added with a 10BaseT link. Compare the performance of the four scenarios in terms of delay, throughput, and collision count. Analyze the results. Note: To replace a hub with a switch, right-click on the hub and assign ethernet16_switch to its model attribute.

Answer: The simulation graphs show the throughput and time delay for the different configurations. From Fig 7, the hub-and-switch configuration shows increased throughput, while the hub-only configuration has the lowest throughput of the four. The time delay of the four configurations is compared in Fig 8: the two-switch configuration has the lowest delay, at around 0.002 seconds, because the switches minimize the time spent contending for the medium. The collision-count analysis was already discussed in Question 2; when two switches are used, the collision count is zero by the nature of the switches. Thus switches are the better choice of device compared with hubs.

CONCLUSION:

The Switched LANs lab assignment therefore gave a clear basis for choosing a network configuration for a given site by teaching the basics of hubs and switches. The simulation results confirmed that a switch is more efficient than a hub, and optimizing a network with a switch instead of a hub can also improve cost efficiency.

LABORATORY 4: NETWORK DESIGN

OVERVIEW:

In this lab we developed a company's network with 4 departments. Since it was a small network, we used the LAN model, and we used OPNET Guru to simulate the network design. Once the design was done, we assessed the outcome and tried to improve the network by changing some of the hardware, such as using separate servers for database, file, and web services versus one server for all three. We also compared the same network using low- versus high-density cables. Thus, this lab is about network optimization.

OBJECTIVE:

The main goal of this lab was to learn the fundamentals of network design. To do this, we took into account the users, the services, and the locations of the hosts.

IMPLEMENTATION PROCEDURE:

To implement this network, we used OPNET Guru, as it is one of the most capable networking tools available: it allows one to simulate a network with virtually any combination of devices and protocols in use today. First, we created an empty project and added the objects Application Configuration, Profile Configuration, and a subnet, as node_0, node_1, and subnet_0 respectively. We then configured the application services for the four user groups, namely engineers, researchers, salespeople, and multimedia users, and configured the subnet. Next, we created a 10-workstation star-topology LAN for each of the four departments listed above and configured each department. Finally, we configured the three servers according to the services each one provides and connected each department to the subnet.
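
The resulting topology can be summarized as follows (a plain Python description for illustration only; the dictionary keys are our own names, not OPNET object names):

departments = ["Engineering", "Research", "Sales", "Multimedia"]

network = {
    "subnet_0": {
        "lans": {
            dept: {"topology": "star", "workstations": 10}
            for dept in departments
        },
        "servers": ["web_server", "database_server", "file_server"],
    }
}

total_hosts = sum(lan["workstations"] for lan in network["subnet_0"]["lans"].values())
print(total_hosts)      # 40 workstations across the four department LANs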

Finally, we set up the network to collect the global HTTP page response time statistic and ran the simulation to assess the results. We then kept the same setup as in Figure 1 above, changed the background utilization to 99 percent to create the Busy Network scenario, and ran the simulation again. Lastly, we duplicated the Busy Network and replaced the low-density cables with high-density ones to observe the differences. At this point the results are ready to be observed and the network analyzed.

OBSERVATIONS AND RESULTS ANALYSIS:

In Figure 4.3 below, we observed that the response time of the busy network was much higher than that of the simple network. The simple network also stabilizes much more quickly than the busy network.

Figure 3 shows that high-density cables were very helpful in optimizing the results: with high-density cables, the busy network produces a response time as if it were the simple network, and it stabilizes quickly as well.

It was apparent that the File Server stabilizes much faster than both the Database Server and the Web Server, and that the Database Server fluctuates the least. It also became apparent that a single server replacing the three servers carries the most load, so its CPU utilization is the highest, somewhat higher than that of the Web Server alone.

QUESTIONS AND ANSWERS:

Question 1: Analyze the result we obtained regarding the HTTP page response time. Collect four other statistics, of your choice, and rerun the simulation of the Simple and the busy network scenarios. Get the graphs that compare the collected statistics. Comment on these results.

Answer: In the HTTP page response time graph for the simple and busy networks, the simple network is shown in blue and the busy network in red. Comparing the two shows that the delay is much lower in the simple network than in the busy network.

We have collected four other statistics for comparison, and they are depicted in the graphs below:

The first plot shows the Ethernet delay; from the figure we can clearly see that the delay in the busy network is higher than in the simple network.

The next plot compares the TCP delay. The TCP delay for the busy network fluctuates heavily at first and then levels off, though it remains comparatively high. The simple network, on the other hand, shows only small initial fluctuations in delay and stabilizes after a short period of time.

The graph below shows the response time for DB Entry, and the comparison between the simple and busy networks shows a great deal of difference. The busy network fluctuates a little at the initial stage and then settles into a stable state, but its delay remains high. The simple network has a quite stable response time from the beginning, as can be observed from the graph below.

The response time for DB Query is shown below. Here too the simple network has a quite stable delay that is very small compared with the initially fluctuating delay of the busy network.

Question 2: In the Busy Network scenario, study the utilization% of the CPUs in the servers (right-click on each server and select Choose Individual Statistics > CPU > Utilization).

Answer: The CPU utilization during the Busy Network scenario for the web server, database server, and file server is shown below:

From the graphs above, it is clear that the File Server stabilizes much faster than both the Database Server and the Web Server. Second, the Database Server fluctuates the least, judging by the magnitude on the y-axis.

Question 3: Create a new scenario as a duplicate of the Busy Network scenario. Name the new scenario Q3_OneServer. Replace the three servers with only one server that supports all required services. Study the utilization% of that server’s CPU. Compare this utilization with the three CPU utilizations you obtained in the previous question.

Answer: It is apparent from the graph above that the single server replacing the three servers carries the maximum load, so its CPU utilization is the highest, somewhat higher than that of the Web Server in the previous scenario. This is because the Web Server was using the most CPU time in the previous busy scenario.

Question 4: Create a new scenario as a duplicate of the Busy Network scenario. Name the new scenario Q4_FasterNetwork. In the Q4_FasterNetwork scenario, replace all 100BaseT links in the network with 10Gbps Ethernet links and replace all 10BaseT links with 100BaseT links. Study how increasing the bandwidth of the links affects the performance of the network in the new scenario (e.g., compare the HTTP page response time in the new scenario with that of the Busy Network).

Answer: After making all the changes to the links, we have the following results:

From the results above, it is clear that the network response time is much faster and that it stabilizes quickly as well. In other words, the Q4_FasterNetwork scenario behaves comparably to the Simple Network rather than the Busy Network, purely because we used the faster, higher-capacity links.
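
A rough back-of-the-envelope check (our own calculation, not an OPNET output, and the frame size is an assumption) shows how much the per-frame transmission time drops with the faster links:

FRAME_BYTES = 1500        # assumed Ethernet frame size

def serialization_delay_us(frame_bytes, rate_bps):
    """Time to clock one frame onto the wire, in microseconds."""
    return frame_bytes * 8 / rate_bps * 1e6

for name, rate_bps in [("10BaseT", 10e6), ("100BaseT", 100e6), ("10 Gbps Ethernet", 10e9)]:
    print(f"{name:16s}: {serialization_delay_us(FRAME_BYTES, rate_bps):8.2f} us per frame")
# 10BaseT: 1200 us, 100BaseT: 120 us, 10 Gbps Ethernet: 1.2 us

Propagation and queueing are ignored here, but the roughly hundredfold reduction per hop is consistent with the faster, quickly stabilizing response times observed in the simulation.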

CONCLUSION:

In conclusion, we learned the basics of designing a network, taking into consideration the users, the services, and the locations of the hosts. We learned this using the OPNET tool, which is excellent for simulating network systems. We also learned that using high-density cables greatly optimizes the network: they can make a busy network respond as quickly as a simple network and stabilize it quickly as well. We further noticed that using separate servers for different activities, such as database, file, and web services, gives better CPU utilization. Thus, one should consider using high-density cables and separate servers for database, file, and web services to avoid overloading a single server.

 
