Is it possible to access the current "Iteration Number" from WS during an Optimization? Thanks!
Vince
Hi Eugene!
Not really... My previous question was about altering the Optimization parameters. This time I would just like to know which iteration the Optimization is currently running; I am not looking to alter it. Thanks!
Vince
For an exhaustive optimization you can try to infer it by looking at the current optimization parameter values and comparing them against their ranges, since the full grid is known beforehand.
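For instance, here is a minimal C# sketch of that idea, assuming you can read the current parameter values and that each parameter has a known Start/Stop/Step; the Range record and the parameter ordering (fastest-varying first) are illustrative assumptions, not a Wealth-Lab API.
CODE:
using System;

// Hypothetical sketch: recover the exhaustive-optimization iteration index
// from the current parameter values, given each parameter's Start/Stop/Step.
// Assumes the optimizer varies the first parameter fastest (mixed-radix order).
public static class IterationIndex
{
    public record Range(double Start, double Stop, double Step);

    public static long FromValues(double[] current, Range[] ranges)
    {
        long index = 0, stride = 1;
        for (int i = 0; i < ranges.Length; i++)
        {
            // number of grid points for this parameter
            long steps = (long)Math.Floor((ranges[i].Stop - ranges[i].Start) / ranges[i].Step) + 1;
            // this parameter's current "digit" in the mixed-radix index
            long digit = (long)Math.Round((current[i] - ranges[i].Start) / ranges[i].Step);
            index += digit * stride;
            stride *= steps;
        }
        return index; // zero-based iteration number
    }
}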
I primarily use the PSO, which is the genesis of my question. I assume from your response that it is not possible, correct?
Vince
You might be able to set up a global variable that your strategy increments each time the optimizer calls it, giving you the execution iteration; a minimal counter sketch follows this post. Even sampling a system time stamp might be revealing. But ...
... I'm thinking the goal here is really about "model stability," and that's not the optimizer's problem; model stability is the strategy's problem. One must select the modeling parameters such that they remain orthogonal to each other across all areas of the solution space. The optimizer might be able to control the stiffness (and precision) of the solution method, but it does not control the stability of the model itself.
I would substitute one parameter for another that's more universally orthogonal; that's the right solution to fix model stability.
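Here is a minimal sketch of the global-counter idea mentioned above, in plain C# with no Wealth-Lab dependency; Interlocked keeps the count safe if the optimizer evaluates runs on parallel threads. The class and method names are illustrative.
CODE:
using System.Threading;

// Hypothetical "global counter": a static field the strategy bumps once per
// optimization run. Not a Wealth-Lab API; call Next() once per strategy
// execution and treat the returned value as the iteration number.
public static class RunCounter
{
    private static long _runs;

    public static long Next() => Interlocked.Increment(ref _runs);

    // Call before starting a fresh optimization session.
    public static void Reset() => Interlocked.Exchange(ref _runs, 0);
}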
superticker,
Thanks, but it is not a question of model stability or orthogonality; rather, it is a matter of input-parameter importance. There is a natural progression: the contributions of the major inputs should be established before the minor ones are incorporated. I currently do this manually, assessing their importance and then running a series of manually staged optimizations. I am looking to do an "automatically staged" optimization, where the major inputs are optimized first and the less important ones are added progressively. I have used this approach in other ML modeling tasks quite successfully.
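To make that concrete, here is a hedged sketch of how a strategy could stage its inputs off a run counter: minor inputs stay pinned to their defaults until the run count passes a threshold. The Stage2Start value and the Gate helper are hypothetical illustrations, not an optimizer feature.
CODE:
// Hypothetical "automatically staged" gating: early iterations establish the
// major inputs while minor inputs are pinned to defaults; later iterations
// release the minor inputs to the optimizer. Stage2Start is an assumed value.
public static class StagedInputs
{
    const long Stage2Start = 500; // runs before minor inputs are released

    // Value the strategy should actually use for a minor input on this run.
    public static double Gate(long run, double optimizedValue, double defaultValue)
        => run < Stage2Start ? defaultValue : optimizedValue;
}

// Usage inside a strategy run, assuming the RunCounter sketch above:
//   long run = RunCounter.Next();
//   double minorInput = StagedInputs.Gate(run, optimizerSuppliedValue, 30.0);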
I appreciate your suggestion of using a global counter (which is quite clever! :) ) and, in the absence of any alternative, will go that route. Thanks!
Vince
QUOTE:
There is a natural progression that the contributions of major inputs should be established before the minor ones are incorporated.
That's a good idea. Well, you could have the optimizer optimize one or two initial parameters first. Afterwards, you could rerun the optimizer and have it optimize the remaining parameters. I like that better than the global counter.
There's one optimizer design that weights the inputs by their covariance matrix, much like the general solution to the Kalman filter does. It was discussed on this forum somewhere. Perhaps you've seen it. I have thought about porting it to WL, but it's not really a high priority. It's an excellent idea, though.
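Purely to illustrate the flavor of that design (not the design from the forum post, and not a WL feature), each parameter's search step could scale with its estimated uncertainty, the way a Kalman gain trusts noisy states less:
CODE:
using System;

// Illustrative sketch only: shrink the search step for parameters whose
// estimates are already tight (low variance) and keep it wide for uncertain
// ones. The variance diagonal would come from the best runs found so far.
public static class CovarianceSteps
{
    public static double[] NextSteps(double[] variance, double scale = 1.0)
    {
        var steps = new double[variance.Length];
        for (int i = 0; i < variance.Length; i++)
            steps[i] = scale * Math.Sqrt(variance[i]); // wider where uncertain
        return steps;
    }
}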
QUOTE:
That's a good idea. Well, you could have the optimizer optimize one or two initial parameters first. Afterwards, you could rerun the optimizer and have it optimize the remaining parameters.
Exactly what I do! :)
QUOTE:
There's one optimizer design that weights the inputs by their covariance matrix, much like the general solution to the Kalman filter does. It was discussed on this forum somewhere.
That would be a welcome improvement over the current pure stochastic approach. Perhaps in WL7.1? ;)
Vince