Hello,
I am looking for help in making IIS consume *more* resources.
The problem I am facing is that I cannot make IIS consume all of the resources available to it, no matter how many resources I give it.
I am attempting to profile a web application in which potentially several thousand users poll a PHP script every 2 seconds or so. I understand this is inefficient compared to, say, sockets, but it is the only mechanism available to me for the time being due to previous development constraints.
To do this profiling I spun up a new, amply resourced instance of Windows Server 2008 R2 Enterprise on our private cloud and installed the IIS role plus supporting extensions. The software is currently:
* IIS 7.5
* PHP 5.4
* WinCache (for PHP 5.4)
* URL Rewrite
A MySQL server runs on a separate machine, connected over the local network. There are about 10 queries per page load, but each query returns within 0.002 seconds, so I believe waiting for them can be discounted.
The problem comes when I start to put the server under load: for some reason, which I hope someone here can help me with, the processor is limited to an average of around 50% CPU utilization.
If I start with 1 core running at around 2.3 GHz, I get about 83 requests per second at 50% CPU utilization. If I then double the core count to 2x 2.3 GHz, I handle around 150 requests per second, but still at only 50% CPU. And so on as I add further cores.
I have experienced much the same thing on bare hardware (no virtualization) with IIS 7.0, although at the time I dismissed it.
Notes:
* The server I am currently testing on has nothing but IIS installed and has ample RAM assigned to it; the only variable I change during testing is the number of processor cores, set via vCloud Director before restarting the VM.
* I am running PHP under FastCGI. Max instances is set to 16, although raising or lowering this has little impact on the total number of requests per second.
* Load testing is done with PHP scripts running on multiple external servers, issuing HTTP requests via cURL. This is required to simulate more realistic load, which includes varying the POST fields to reflect real JSON content; without that, optimizations to the actual running code would not be profiled correctly. Although each load-testing script is single-threaded, enough instances of them are running to saturate as many FastCGI processes as I run.
* Running a `while (true)` loop consumes 100% CPU, which suggests the problem lies in handling multiple concurrent requests.
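For reference, the FastCGI instance limit mentioned above is applied with appcmd roughly like this (the `php-cgi.exe` path is just where it lives on this box; adjust as needed):

```
REM Illustrative: sets maxInstances for the registered PHP FastCGI handler
REM (the fullPath value is an example of my install location, not universal)
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/fastCgi ^
  "/[fullPath='C:\PHP\php-cgi.exe'].maxInstances:16" /commit:apphost
```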
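To give an idea of the load generators, each single-threaded script looks roughly like the sketch below; the URL and payload fields are placeholders, not the real test values:

```php
<?php
// loadtest.php -- illustrative sketch of one single-threaded load script.
// 'http://test-server/poll.php' and the payload keys are hypothetical.
$url = 'http://test-server/poll.php';

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, true);

while (true) {
    // Vary the POST body each iteration so the JSON-handling code paths
    // are exercised, rather than replaying one cacheable request.
    $json = json_encode(['user' => rand(1, 5000), 'ts' => microtime(true)]);
    curl_setopt($ch, CURLOPT_POSTFIELDS, ['data' => $json]);
    curl_exec($ch); // blocks until the response returns, then loops immediately
}
```

There is deliberately no sleep in the loop, so each script issues requests back-to-back, and enough copies run in parallel to keep every FastCGI process busy.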
That's all I have at the moment; any advice on the matter would be greatly appreciated, and even reaching 80% utilization would be a big help.