As we have seen before, we can pass the ISSManager a set of parameters to tune an ISS instance.
No single set of values gives the best performance for every kind of process, so tuning is practically mandatory for a real-life critical process. Critical processes are the main reason ISS was created in the first place!
Let's see how you would do it in code:
Properties tuneCfg = new Properties();
// Populate them
tuneCfg.setProperty("sample.stats.numberOfGroups", "5");
tuneCfg.setProperty("sample.stats.elementsPerGroup", "20");
tuneCfg.setProperty("sample.iss.asynchronicityThreshold", "20");
tuneCfg.setProperty("sample.iss.minimumThreshold", "3");
tuneCfg.setProperty("sample.iss.maximumThreshold", "50");
tuneCfg.setProperty("sample.iss.minimumExecutionTime", "500");
tuneCfg.setProperty("sample.iss.maximumExecutionTime", "5000");
tuneCfg.setProperty("sample.iss.maximumActiveWorkers", "50");
ISS sampleISS = ISSManager.getISS("sample", tuneCfg);
The previous example provides the ISSManager with a Properties object containing all configurable ISS parameters.
The values illustrated here are the defaults.
We could also read them from a file:
Properties tuneCfg = new Properties();
// Populate them from a file
tuneCfg.load(new FileInputStream("sampleCfg.properties"));
ISS sampleISS = ISSManager.getISS("sample", tuneCfg);
Sample content of the file:
sample.stats.numberOfGroups = 5
sample.stats.elementsPerGroup = 20
sample.iss.asynchronicityThreshold = 20
sample.iss.minimumThreshold = 3
sample.iss.maximumThreshold = 50
sample.iss.minimumExecutionTime = 500
sample.iss.maximumExecutionTime = 5000
sample.iss.maximumActiveWorkers = 50
This is normally the best practice, since there can be great differences between the development, testing, and the various production servers and their loads.
This approach lets you ship a sensibly tuned default configuration while still allowing each deployment to tweak it a little more at deployment time.
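One way to combine a bundled default configuration with per-deployment tweaks is to use the defaults-chaining constructor of java.util.Properties. The sketch below is illustrative only (the class name and the in-memory setup are not part of ISS); in practice both sets of properties would typically be loaded from files as shown above.

```java
import java.util.Properties;

public class LayeredConfig {

    // Layers per-deployment overrides on top of the bundled defaults.
    // Keys absent from 'overrides' fall back to 'defaults'.
    public static Properties merge(Properties defaults, Properties overrides) {
        Properties merged = new Properties(defaults);
        merged.putAll(overrides);
        return merged;
    }

    public static void main(String[] args) {
        Properties defaults = new Properties();
        defaults.setProperty("sample.iss.minimumThreshold", "3");
        defaults.setProperty("sample.iss.maximumThreshold", "50");

        // Deployment-time tweak for one specific server.
        Properties overrides = new Properties();
        overrides.setProperty("sample.iss.maximumThreshold", "80");

        Properties cfg = merge(defaults, overrides);
        System.out.println(cfg.getProperty("sample.iss.minimumThreshold")); // falls back to the default
        System.out.println(cfg.getProperty("sample.iss.maximumThreshold")); // overridden per deployment
    }
}
```

The merged Properties object can then be handed to ISSManager.getISS exactly as in the examples above.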
This is what determines ISS's ability to adapt correctly to the specific load of your use case. If ISS is not configured properly, it may be unable to adjust toward the best-performing settings for your scenario.
Let's describe each parameter.
These parameters control the engine that records the execution time of each executed process in order to provide the averages needed by ISS's decision-making logic.
The important trade-off when changing these parameters: the more samples you keep, the longer your average takes to register a performance shift, and thus the longer ISS takes to adjust to it. Conversely, the fewer samples you keep, the more exposed the average is to performance spikes; a single random execution that is much faster or much slower will provoke a response from ISS, adjusting the load it allows. That can be precisely what you want... or, on the contrary, a very bad side effect.
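The trade-off can be seen with a plain sliding-window average (this is not ISS's internal algorithm, just an illustration of the window-size effect): after a steady stream of 500 ms executions, a single 5000 ms spike moves a 5-sample average far more than a 100-sample one.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class WindowedAverage {
    private final int capacity;
    private final Deque<Long> samples = new ArrayDeque<>();
    private long sum;

    public WindowedAverage(int capacity) {
        this.capacity = capacity;
    }

    // Records one execution time; once full, the oldest sample drops out.
    public void record(long millis) {
        samples.addLast(millis);
        sum += millis;
        if (samples.size() > capacity) {
            sum -= samples.removeFirst();
        }
    }

    public double average() {
        return samples.isEmpty() ? 0 : (double) sum / samples.size();
    }

    public static void main(String[] args) {
        WindowedAverage shortWin = new WindowedAverage(5);
        WindowedAverage longWin = new WindowedAverage(100);
        for (int i = 0; i < 100; i++) {
            shortWin.record(500);
            longWin.record(500);
        }
        // A single 5000 ms spike arrives.
        shortWin.record(5000);
        longWin.record(5000);
        System.out.printf("short window: %.0f ms%n", shortWin.average()); // jumps to 1400
        System.out.printf("long window:  %.0f ms%n", longWin.average());  // barely moves, 545
    }
}
```

A small window reacts quickly (useful when load genuinely shifts) but is easily fooled by one outlier; a large window smooths out spikes but lags behind real changes.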
A quick rule of thumb:
You may wonder why the samples are divided into groups. This is mainly related to keeping the statistics-collection algorithm performant: each time a group is completed it is added to a group queue, and the oldest one is discarded.
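The text only says that completed groups are queued and the oldest discarded, so the following is a plausible sketch of such a structure, not ISS's actual implementation (the class and method names are hypothetical). Keeping a running {sum, count} per group means discarding the oldest group is a single O(1) queue operation instead of per-sample bookkeeping across the whole window.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class GroupedStats {
    private final int numberOfGroups;    // cf. sample.stats.numberOfGroups
    private final int elementsPerGroup;  // cf. sample.stats.elementsPerGroup
    private final Deque<double[]> groups = new ArrayDeque<>(); // {sum, count} per completed group
    private long currentSum;
    private int currentCount;

    public GroupedStats(int numberOfGroups, int elementsPerGroup) {
        this.numberOfGroups = numberOfGroups;
        this.elementsPerGroup = elementsPerGroup;
    }

    // Adds one execution-time sample; when the current group fills up it is
    // queued and, if the queue is already full, the oldest group is discarded.
    public void add(long millis) {
        currentSum += millis;
        currentCount++;
        if (currentCount == elementsPerGroup) {
            if (groups.size() == numberOfGroups) {
                groups.removeFirst(); // drops a whole group in one O(1) step
            }
            groups.addLast(new double[] { currentSum, currentCount });
            currentSum = 0;
            currentCount = 0;
        }
    }

    // Average over all completed groups plus the in-progress one.
    public double average() {
        double sum = currentSum;
        double count = currentCount;
        for (double[] g : groups) {
            sum += g[0];
            count += g[1];
        }
        return count == 0 ? 0 : sum / count;
    }
}
```

With the default values (numberOfGroups = 5, elementsPerGroup = 20) this would keep a rolling window of roughly the last 100 samples, refreshed 20 samples at a time.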
These parameters control the management of the sync/async behaviour of ISS.
As you can see, setting the wrong maximum or minimum values in the previous parameters can prevent ISS's adjustment logic from reaching the best-performing configuration. When in doubt, you can always trust ISS and give it a good broad range of flexibility between the "asynchronicityThreshold" minimum and maximum; but if it does not know realistic execution-time values, its adjustment will never be accurate.
For a Sample Application that tests ISS performance, where you can test-drive different sets of parameters, go here.