When optimizing a Web server, the optimization plan must be tailored to the situation and characteristics of the actual Web application system.

Consider first the characteristics of the network. In a local area network, reducing the MTU (Maximum Transmission Unit) of connections can avoid extra data copies and checksum computation; optimizing the select() system call, or performing computation directly in the socket event handler, improves the management of concurrent requests; and HTTP/1.1 persistent connections raise throughput further. (A minimal select()-based event loop that serves persistent connections is sketched after this discussion.) In a wide area network, however, these measures have little effect, and some are counterproductive: reducing the MTU of user connections, for example, increases server processing overhead. Because of network delay and bandwidth limits, HTTP/1.1 persistent connections have no major impact on server performance in a WAN, where the end user's waiting time depends chiefly on network delay and connection bandwidth. In a WAN, hard and soft interrupts account for a large share of network processing, so an adaptive interrupt-handling mechanism greatly improves server responsiveness; moving the server into the kernel, or replacing a process-per-request design with event-driven transaction processing, also improves performance to varying degrees.

Regarding the Web load, besides analyzing its characteristics so that realistic load can be reproduced during evaluation, the load conditions of the network environment in which the server sits must also be considered. A server is expected not only to meet its normal workload but to sustain high throughput at peak times; in practice, performance under high load is often lower than expected. Server overload falls into two types. The first is transient overload. Numerous studies have shown that the traffic of Web requests is self-similar, that is, it can vary significantly over a wide range of time scales; this overloads the server for short periods, but such episodes generally do not last long. The second is sustained overload, generally caused by a special event such as a denial-of-service attack or a receive "livelock". The first type is unavoidable; the second can be mitigated by improving the server itself.

Setting malicious attacks aside, a careful analysis of how the server processes packets shows that the root cause of performance collapse under overload is unfair preemption of the CPU by the high-priority (interrupt) processing stage. Limiting the CPU share consumed by that stage therefore alleviates or eliminates receive livelock. Specifically, the following methods can be used; each is illustrated by a sketch below.

First, use a polling mechanism. Deferring interrupt work to a "bottom half" reduces the impact of interrupts on system performance and is very effective under normal load, but under high load it still leads to livelock, and polling should be used instead. Although polling wastes resources and slows responses under normal load, it is more effective than interrupt-driven processing when network data arrives at the server continuously.

Second, reduce context switching. This improves server performance under any load and can be achieved by introducing kernel-level or hardware-level data flow. With kernel-level data flow, data is forwarded from its source across the system bus without passing through an application process; because the data resides in main memory, the CPU must still operate on it. With hardware-level data flow, data is forwarded from its source over a private data bus, or by DMA across the system bus, again without passing through an application process, and the CPU never touches the data. In either case no user thread takes part in the transfer, which reduces the number of data copies and the context-switching overhead.

Third, reduce the interrupt frequency (mainly under high load). Two techniques apply: batching interrupts and temporarily disabling them. Interrupt batching effectively suppresses livelock under overload but brings no fundamental improvement in server performance. When the system shows signs of receive livelock, interrupts can be disabled temporarily to ease the burden on the system and re-enabled once buffer space becomes available again; this approach loses packets if the receive buffer is not large enough.
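The select()-based concurrency and HTTP/1.1 persistent connections mentioned for the LAN case can be made concrete with a minimal sketch in C. This is an illustrative toy, not production code: the port number, buffer size, and canned response are arbitrary choices, and a real server would parse requests and honor Connection: close.

```c
/* Minimal select()-based event loop: one process multiplexes many
 * client sockets, keeping each connection open (HTTP/1.1 style)
 * instead of closing after one request. Error handling abbreviated. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void) {
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);             /* arbitrary example port */
    int on = 1;
    setsockopt(listener, SOL_SOCKET, SO_REUSEADDR, &on, sizeof on);
    bind(listener, (struct sockaddr *)&addr, sizeof addr);
    listen(listener, 128);

    fd_set master;
    FD_ZERO(&master);
    FD_SET(listener, &master);
    int maxfd = listener;

    for (;;) {
        fd_set readable = master;            /* select() modifies its set */
        if (select(maxfd + 1, &readable, NULL, NULL, NULL) < 0)
            continue;
        for (int fd = 0; fd <= maxfd; fd++) {
            if (!FD_ISSET(fd, &readable))
                continue;
            if (fd == listener) {            /* new connection arriving */
                int c = accept(listener, NULL, NULL);
                if (c >= 0) {
                    FD_SET(c, &master);
                    if (c > maxfd) maxfd = c;
                }
            } else {                         /* request on an open connection */
                char buf[4096];
                ssize_t n = read(fd, buf, sizeof buf);
                if (n <= 0) {                /* peer closed: drop connection */
                    close(fd);
                    FD_CLR(fd, &master);
                } else {
                    /* Reply but keep the socket open so the client can
                     * reuse it for its next request (persistent connection). */
                    const char *resp = "HTTP/1.1 200 OK\r\n"
                                       "Content-Length: 2\r\n"
                                       "Connection: keep-alive\r\n\r\nok";
                    write(fd, resp, strlen(resp));
                }
            }
        }
    }
}
```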
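The first method, switching from interrupt-driven reception to polling under load, is the idea behind mechanisms such as Linux's NAPI. The following user-space toy models only the control flow; the "device" is simulated by a plain counter, so none of this is a real driver API.

```c
/* Toy model of hybrid interrupt/poll packet reception. The interrupt
 * handler does no protocol work: it masks further RX interrupts and
 * defers to a budgeted polling pass, so a packet flood cannot
 * monopolize the CPU in interrupt context (receive livelock). */
#include <stdio.h>
#include <stdbool.h>

#define POLL_BUDGET 8          /* max packets drained per poll pass */

static int  rx_queue    = 0;   /* simulated device RX queue depth */
static bool irq_enabled = true;

static void poll_rx(void);

/* Simulated RX interrupt: mask interrupts and switch to polling. */
static void rx_interrupt(void) {
    irq_enabled = false;
    poll_rx();
}

/* Polling pass: drain up to POLL_BUDGET packets, then yield. Only
 * when the queue is empty are interrupts unmasked again. */
static void poll_rx(void) {
    int done = 0;
    while (done < POLL_BUDGET && rx_queue > 0) {
        rx_queue--;            /* "deliver one packet to the stack" */
        done++;
    }
    printf("poll pass drained %d packets, %d left\n", done, rx_queue);
    if (rx_queue == 0)
        irq_enabled = true;    /* light load: back to interrupt mode */
    /* else: stay in polling mode; the scheduler would call us again */
}

int main(void) {
    rx_queue = 20;             /* a burst of packets arrives */
    if (irq_enabled)
        rx_interrupt();        /* hardware raises a single interrupt */
    while (!irq_enabled)       /* remain in poll mode until drained */
        poll_rx();
    return 0;
}
```

The budget is the key design choice: it caps how much CPU the high-priority receive path can take in one pass, which is exactly the "limit the CPU share of the high-priority stage" remedy described above.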
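The second method's kernel-level data flow is available on Linux through the sendfile(2) system call: file contents move from the page cache to the socket buffers entirely inside the kernel, never entering a user-space buffer. A minimal sketch, assuming an already-connected socket:

```c
/* Kernel-level data flow with sendfile(2) on Linux: the file's bytes
 * move from the page cache to the socket inside the kernel. Compare
 * with a read()/write() loop, which copies every byte through user
 * memory and doubles the number of kernel/user crossings. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/sendfile.h>

/* Send a file over an already-connected socket without user copies. */
int send_file_zero_copy(int sock_fd, const char *path) {
    int file_fd = open(path, O_RDONLY);
    if (file_fd < 0) {
        perror("open");
        return -1;
    }
    struct stat st;
    if (fstat(file_fd, &st) < 0) {
        perror("fstat");
        close(file_fd);
        return -1;
    }
    off_t offset = 0;
    while (offset < st.st_size) {
        /* The kernel copies directly from the page cache to the
         * socket; the process never touches the data. */
        ssize_t sent = sendfile(sock_fd, file_fd, &offset,
                                st.st_size - offset);
        if (sent <= 0) {
            perror("sendfile");
            close(file_fd);
            return -1;
        }
    }
    close(file_fd);
    return 0;
}
```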
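The third method's interrupt batching is exposed on Linux NICs as interrupt coalescing, configurable with `ethtool -C` or programmatically through the SIOCETHTOOL ioctl, as sketched below. The interface name eth0 and the thresholds are arbitrary examples; changing coalescing settings requires CAP_NET_ADMIN, and not every driver supports them.

```c
/* Interrupt batching (coalescing): ask the NIC to raise one RX
 * interrupt per batch of frames or per time window instead of one
 * per packet. Roughly equivalent to:
 *   ethtool -C eth0 rx-usecs 100 rx-frames 64            */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);  /* any socket carries the ioctl */
    if (fd < 0) { perror("socket"); return 1; }

    struct ethtool_coalesce ec;
    memset(&ec, 0, sizeof ec);
    struct ifreq ifr;
    memset(&ifr, 0, sizeof ifr);
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1); /* example interface */
    ifr.ifr_data = (void *)&ec;

    ec.cmd = ETHTOOL_GCOALESCE;               /* read current settings */
    if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) { perror("GCOALESCE"); return 1; }

    ec.cmd = ETHTOOL_SCOALESCE;               /* then update thresholds */
    ec.rx_coalesce_usecs = 100;               /* fire at most every 100 us */
    ec.rx_max_coalesced_frames = 64;          /* or once 64 frames queue up */
    if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) { perror("SCOALESCE"); return 1; }

    close(fd);
    return 0;
}
```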
Web server performance is a key link in the overall Web system, and improving it has long been a focus of attention. The analysis above of how Web servers work, and of existing optimization methods and techniques, leads to one conclusion: improving Web server performance requires analyzing each problem on its own terms, and in a given application environment the optimization measures should be chosen to match its characteristics.