Particle Swarm Optimization (PSO) is a heuristic that speeds up optimization. In PSO, optimization starts with random sets of parameters, called particles, and converges on a solution by iteratively adjusting the parameters toward the "good" values seen so far.
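For readers new to the technique, here is a minimal sketch of the canonical update rule (my illustration, not the extension's code; the constants w, c1, c2 shown are Clerc's commonly cited constriction values, just one common choice):
CODE:
using System;

class ParticleStep
{
    static readonly Random Rng = new Random(Guid.NewGuid().GetHashCode());

    // One velocity/position update for a single particle.
    // pBest = this particle's best-known position; gBest = the swarm's best.
    static void Update(double[] x, double[] v, double[] pBest, double[] gBest,
                       double w = 0.729, double c1 = 1.494, double c2 = 1.494)
    {
        for (int d = 0; d < x.Length; d++)
        {
            v[d] = w * v[d]
                 + c1 * Rng.NextDouble() * (pBest[d] - x[d])   // pull toward own best
                 + c2 * Rng.NextDouble() * (gBest[d] - x[d]);  // pull toward swarm best
            x[d] += v[d];                                      // move the particle
        }
    }
}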
I've written a new optimizer for WL Pro and Developer, available as a
downloadable extension. It implements eight different PSO algorithms and can be used with WFO as well as regular optimization.
Full documentation is available in the WealthLab Wiki. The extension is downloadable from "Extensions", Category "Addin".
Give it a try. It's remarkably fast! Feedback would be appreciated.
Thank you for the excellent addition!
Looks great and I can't wait to try it! .. but I'll have to wait until next week for the chance. Good work guys.
Why use the Particle Swarm Optimizer? It can be a faster substitute for Genetic or Monte Carlo.
Summary: I did a study to compare the PSO Optimizer to the Genetic and Monte Carlo optimizers. PSO found slightly better optimum parameter sets in less time. The average run time for Genetic and Monte Carlo optimizers was 5.9 minutes; the average run time for PSO was 3.7 minutes. The table below can also be used to compare the eight PSO algorithms, in terms of maximum APR% found and run time. The highest APR% was found by PSO algorithm “Fully Informed(FIPS)/GA”, 16.30%. The fastest run time, 1.4 minutes, was achieved by “Fully Informed(FIPS)”, while still finding a respectable 15.96% APR (better than Genetic or Monte Carlo).
Test details: I used a strategy with five parameters, optimizing 97 symbols over a six-year backtest. The Start/Stop/Step values would have resulted in 3,006,366 runs for the exhaustive optimizer. All optimizers here used 259 runs of the strategy or fewer (Calcs column). Because Genetic and Monte Carlo (and also PSO) are random, I did two runs each of Genetic and Monte Carlo; results were similar. To make the runs roughly equivalent, I set up each optimizer to target 250 calculations.
Optimizer Settings
Genetic: Population 25 for 10 generations
Monte Carlo: 10 runs of 25 tests
PSO: 10 iterations of 25 particles
CODE:
Please log in to see this code.
Sample PSO “Progress Log”
CODE:
Please log in to see this code.
This is wonderful!!!! It's funny that I was coming into the forums looking for a bug problem I encountered and stumbled into this topic... which totally got me sidetracked, as it is of so much interest to me.
For Leonard and the team that aided him with this: Thank you for such a great effort to bring the platform level up with more recent technology.
Can't wait to try it hopefully today or tomorrow!!
A couple questions:
- Are you releasing the source code for this extension, so that it can be improved by the community?
- Just one more thing... I couldn't find it in the extension manager, so I had to install it from the download. Are you guys going to include it in the extension manager so that people can keep up easily with updates?
QUOTE:
- Just one more thing... I couldn't find it in the extension manager, so I had to install it from the download. Are you guys going to include it in the extension manager so that people can keep up easily with updates?
You can already keep up easily with all the updates by switching to the "Other" or "Updates" tab. It's also possible to enable an extension update check on program startup.
As to making it (or any other extension) available for download without visiting the website, the answer is a definite no. We have to distribute a handful of "Fidelity supported" extensions this way, but I hope the list will not grow.
QUOTE:
Are you releasing the source code for this extension, so that it can be improved by the community?
Not at this time. I've asked that the source not be released due to my considerable time investment. It's currently about 3,000 lines of code (and I probably threw at least that many away in rewrites). And it's complex, implementing eight PSO algorithms with a lot of shared code. Changing one algorithm could have side effects on others. The WL optimizer host, which calls all optimizers, sets the rules under which an optimizer must operate. It is a multi-threaded app, which added considerable complexity to the user interface. Casual changes could affect its stability.
(EDIT)
QUOTE:
and the team that aided him
Eugene was a big help with this, even through my tantrums.
Leonard: I have been trying out your PSO, and initially it looks VERY GOOD.
What I like:
- Log: The log is JUST RIGHT.
  - It outputs VERY appropriate information to use when you are going back and analyzing a run you did in the past:
    - the strategy being optimized
    - the data set against which it is being optimized
    - the time frame over which it is being run
    - the position size
    - the parameters being optimized and the ranges for each parameter
    - the PSO algorithm selected and its arguments
    - the total size of the optimization job at the end (total number of cycles done)
  - It strikes just the right balance of summary vs. detail, summarizing the best values achieved in each iteration.
- My results so far confirm that the algorithm tends to converge much faster than Genetic Evolution with similar parameters, even though a global maximum solution is not guaranteed (but such is the nature of Particle Swarm optimization).
Things to improve:
- Log:
  - If the user doesn't have the habit of putting the time resolution in the Data Set name, it is missing from the log. It would be nice if, besides the data set name, you could also output the time resolution of that data set.
  - If only one instrument of a given data set is being used, it would be nice if that were specified, because otherwise it seems that the optimization was run on the whole data set.
- Optimization process:
  - If you use a combined variable to optimize (such as Recovery Factor, which is essentially a ratio of profit/drawdown) and one of the particles hits a zero-drawdown solution, the ratio goes to infinity and the rest of the swarm tends to gravitate toward this seemingly better, but completely wrong, solution. Can you make it ignore the infinite valuations of the cost function?
  - More importantly, if you are using a variable that should be optimized by minimizing (as opposed to maximizing profit), such as Drawdown or Ulcer Index, the algorithm doesn't provide an argument to identify the direction of the optimization (maximize or minimize), and so assumes that the cost function should be maximized, which is just wrong. The Genetic optimizer also suffers from this; I have raised it with Eugene and Cone as well. (A rough sketch of both ideas follows this list.)
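To make the two requests concrete, here is a hypothetical fitness wrapper (my illustration only; the names are made up and this is not the extension's code):
CODE:
// Reject non-finite metric values and support minimize-vs-maximize.
static double Fitness(double metric, bool maximize)
{
    if (double.IsNaN(metric) || double.IsInfinity(metric))
        return double.NaN;                 // caller should skip NaN results entirely
    return maximize ? metric : -metric;    // minimizing a metric = maximizing its negation
}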
Bottom Line: Congratulations on this great product!!! It is a great addition of current technology!!! Thank you very much for sharing it.
________________________________________
PS.... Eugene, Cone: It would be nice if the forum editor had bullets and/or indents, so as to be able to easily make lists that look good (using whitespace is too messy and it just doesn't work).
Jorge,
Your observations are gratifying and very helpful. I like your "Things to improve" list and will do a little cost-benefit analysis to see what might be done. Yours is the thoughtful feedback I need.
QUOTE:
1. Can you make it ignore the infinite valuations of the cost function?
2. More importantly, if you are using a variable that should be optimized by minimizing (as opposed to maximizing profit)
These are definitely worth doing. I hadn't thought of the first, but the second has been on my TODO list.
Thanks again.
Len
Jorge,
Re: "optimize by minimizing": The Wiki for Particle Swarm Optimization currently first suggests optimizing using "Net Profit", then says, "Not all metrics have been tested; not all metrics may work with PSO." I still like that. Here's why.
Issue 1: I modified my code to handle the "inverted" metric (not as hard as I thought it would be) and tried some tests. That's when I realized that any optimization method has an issue whenever "towards zero" is better, whether maximizing or minimizing. This is because no trades at all may well produce a zero result (e.g. drawdown, Seykota Lake). In that case, optimization will drive toward no or few trades rather than a meaningful result. You can see this now using "Max Drawdown %", which is a negative value where higher, "towards zero", is better. I didn't find an existing "inverted" metric that produced usable results. Sidebar: There is a bug giving lots of cryptic displays (intended for me) in Clerc Tribes when there are a lot of zero results. That bug will be fixed in the next release.
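To illustrate the trap with a hypothetical snippet (maxDrawdownPct is an illustrative variable, not extension code):
CODE:
// "Towards zero is better": Max Drawdown % is negative, so negate its magnitude.
double fitness = -Math.Abs(maxDrawdownPct);
// A parameter set that produces no trades has maxDrawdownPct == 0, so its
// fitness of 0 beats every parameter set that actually trades, and the
// optimizer is driven toward trading as little as possible.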
Issue 2: Walk-forward optimization (WFO) already implements the inverted metric on its "Control" tab. I'm guessing this was someone's (not so) clever idea to handle the case without touching existing optimizers. This is going to get confusing for the user because, for PSO (and native Monte Carlo), there are two places to set direction, and they may cancel each other out. Sidebar: In WFO, the user is unlikely to be directly aware of effects from Issue 1, because optimizers' custom tabs are hidden.
Bottom Line: The next release may implement the inverted metric, but its usefulness is limited due to the issues above. Of course, new metrics may come along where handling the inverted metric is useful.
The Particle Swarm Optimizer has been updated (Release 2014.11.1) and may be downloaded from "Extensions".
Improvements include:
1. Implemented support for optimizing descending metrics
2. Ignores invalid metric calculation results (NaN, MaxValue, MinValue)
3. Displays Data Scale (Daily, 5-Minute, ...) on the progress log and graph
4. Validates parameter Start, Stop, and Step values: Stop values cannot be less than Start values, and Step values must be positive (or "-1"). (A sketch of these rules follows this list.)
5. Displays a log entry to note if a run was cancelled by the user
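In outline, the item-4 validation amounts to something like this simplified sketch (not the exact shipping code):
CODE:
static void ValidateParameter(double start, double stop, double step)
{
    if (stop < start)
        throw new ArgumentException("Stop value cannot be less than Start value.");
    if (step <= 0 && step != -1)   // -1 conventionally marks a parameter as not optimized
        throw new ArgumentException("Step value must be positive (or -1).");
}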
Len
Dear Sir,
I am a novice user. I installed the optimizer and received an error message when I tried to run it. It's probably something I'm doing wrong.
allanpeace,
Can you provide more information? What was the error message, at least the first line or two?
The Progress Log would also be helpful. To capture it, you may need to cancel the optimization:
1. Click the "Optimization Control" tab.
2. Click the "Cancel Optimization" button (you may need to drag the error display out of the way).
3. Click "OK" on the error display form.
4. Click the "Progress Log" tab.
5. Click the "Copy To Clipboard" button.
Finally, paste (Ctrl-V) the results here.
Len
Here's the error message from a support ticket by allanpeace:
Robert,
You're trying to optimize a rule-based system without Strategy parameters exposed. (Same for a code-based Strategy that doesn't have Strategy parameters.) Suggestion: study the User Guide > Strategy Window > Strategy Builder > Parameter Sliders for the Strategy Builder to learn how to expose a parameter to the Optimizer.
Len,
Please consider gracefully handling the missing Straps case. Thank you.
QUOTE:
Please consider gracefully handling the missing Straps case. Thank you.
In progress. Treatment will be similar to the Genetic optimizer - it will run one (the only) case and exit. The Progress Log will show, "Error: The Strategy has no Parameters to optimize."
Eugene, I will open a ticket and upload the new version after testing.
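In outline, the guard will look something like this (simplified, with hypothetical helper names):
CODE:
// If the Strategy exposes no parameters, run the single default case and stop.
if (strategyParameters.Count == 0)
{
    RunOnce();   // evaluate the one and only combination
    Log("Error: The Strategy has no Parameters to optimize.");
    return;
}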
Extension has been updated. Thanks Len.
Hi!
When using PSO to optimize a strategy based on the "K-Ratio" metric the optimizer instead appears to optimize based on "Net Profit". See below:
20:29:22 16Jan2015: Particle Swarm Optimizer (2015.01.1)
20:29:22 Strategy 'LDL2' using (Daily) DataSet 'S&P 400' (400 symbols)
20:29:22 4% equity, 1/16/2007 to 1/16/2012
20:29:22 Maximize 'K-Ratio' with 20 iterations of 100 particles. (Algorithm: Dynamic with Genetic crossover)
20:29:22 Parameter Name Start Stop Step
20:29:22 Limit Mulitpiler-> 0.85, 0.94, 0.01
20:29:22 Bars to Hold-> 2, 15, 1
20:29:22 Exit Profit %-> 1, 20, 1
20:29:25 Optimizing (S&P 400)...
20:29:25 (New High)K-Ratio=5449.21, APR=1.07%, Random Particle 1 {0.86,14,16}
20:29:40 (New High)K-Ratio=39934.81, APR=6.96%, Random Particle 6 {0.86,14,5}
20:29:54 (New High)K-Ratio=56623.14, APR=9.41%, Random Particle 11 {0.87,2,2}
20:30:14 (New High)K-Ratio=94425.11, APR=14.25%, Random Particle 18 {0.85,5,1}
Vince
QUOTE:
When using PSO to optimize a strategy based on the "K-Ratio" metric the optimizer instead appears to optimize based on "Net Profit".
Isn't the log clearly indicating new highs found in K-Ratio rather than in Net Profit?
CODE:
Please log in to see this code.
QUOTE:
Isn't the log clearly indicating new highs found in K-Ratio rather than in Net Profit?
Those numbers are actually the "Net Profit" numbers that were in the Results tab, which I observed visually. Additionally, you can't get K-Ratios anywhere near those values. I suggest that you run the strategy with PSO and observe the results yourself to verify my observations.
Vince
QUOTE:
Those numbers are actually the "Net Profit" numbers that were in the Results tab, which I observed visually.
I can't duplicate the problem. Here is the log from my test. "New Highs" are not Net Profits.
CODE:
Please log in to see this code.
QUOTE:
08:41:09 #1(New High)K-Ratio=0.07, APR=0.94%, Random Particle 1 {0.648,0.517,1,8,92}
08:41:09 #2(New High)K-Ratio=48.26, APR=0.10%, Random Particle 2 {0.653,0.477,4,5,90}
Those K-Ratios are not possible! Those are "Net Profits", I believe. I suggest you compare those numbers to the data in the Results tab.
Vince
Hmmm. Could it be the K-Ratio calculation under certain conditions? Here's the corresponding "Performance+" report. Note the matching 48.26 value. This is from a Multi Symbol Backtest, no optimization involved. This strategy does not make short trades.
CODE:
Please log in to see this code.
EDIT: There is always a danger, when some parameter combinations produce no trades, that the metric will get a weird, perhaps optimal value. This is not specific to PSO. Can you successfully optimize your strategy using Genetic?
Len
QUOTE:
EDIT: There is always a danger, when some parameter values produce no trades, that the metric will get a weird, perhaps optimal value. This is not specific to PSO. Can you successfully optimize your strategy using Genetic?
I have not tried Genetic, but when I used Monte Carlo it does appear to be correct.
Vince
PS. Please try the Strategy that I used.
I would be very surprised if Genetic, which, like PSO, chooses new parameter combinations based on prior good results, does not suffer the same fate. Monte Carlo does not consider prior results.
QUOTE:
Could it be the K-Ratio calculation under certain conditions?
Yes. The formula is disclosed in the open source code: MS123 Visualizers - Download Project Source Code (previous generation 2012.07 demo)
This is correct, but when a strategy has MANY trades, such large numbers are not possible.
Vince
LenMoz,
EDIT: I have again reproduced the problem:
Step 1: Run PSO using K-Ratio as the target
Step 2: When it completes, close the window
Step 3: Re-open the window for the same strategy; verify that K-Ratio is again the target under Settings
Step 4: Run PSO again
Results from Progress log:
CODE:
Please log in to see this code.
Results from Results tab (condensed):
CODE:
Please log in to see this code.
QUOTE:
EDIT: I have again reproduced the problem:
Thanks for pointing it out; now it's easy to reproduce. It doesn't have anything to do with K-Ratio. Any performance metric that spits out values different enough from Net Profit can be picked, e.g. Avg Bars Held.
I have now been able to duplicate it. That one's for me, looking into it.
Thanks! While you are at it: I notice that the first time I run PSO, the "Results" tab populates in real time. Subsequent re-runs delay the display of the results until completion of the run. Could this be addressed also? Thanks!
Vince
QUOTE:
Subsequent re-runs delay the display of the results until completion of the run.
I have never experienced this, and I run this optimizer a lot. So I can't duplicate it.
BTW I'm testing the fix for the incorrect metric issue.
LenMoz,
Here is a process that I find reliably demonstrates the anomaly:
Here is the strategy: LDL2
Here are the steps:
Step 1: Open WLP, Ver 6.8
Step 2: Select dataset "S&P 400"
Step 3: Open strategy (code above)
Step 4: Select "Optimize"; select PSO
Step 5: Begin PSO; wait until 5-10 results appear real time
Step 6: Cancel Optimization
Step 7: Begin Optimization again; At this point there are no results appearing real time
Step 8: Cancel Optimization. At this point results appear.
If this matters, I am currently using a 64-bit version of Vista on this particular machine today.
Vince
Vince,
I still can't duplicate it. I run WLP 6.8.10.0 under Windows 7 and, as I said, I do this all the time. This may get technical - I don't know your background... There is some fancy code involved because, as alluded to earlier, PSO is called by a multi-threaded WLP process (call it the "host"). The host controls the instantiating of PSO's custom forms based on your clicking the tabs, and PSO code may need to queue the displays until the host instantiates the form, which doesn't happen until you click the "Progress Log" tab. Your perception is that the tab was already populated, but that only happened when you clicked the tab. The queue is run on the form's HandleCreated event. It was easily as hard to get the forms to work as it was to write the PSO code (this was my first Windows Forms experience).
All that said, it's possible that forms are handled differently under Vista.
BTW, you have been excellent in describing your problem and helping find a solution.
LenMoz,
Sorry that you could not duplicate my results. As you said, it is probably Vista (though I thought that my Windows 7 machine behaved in a similar manner).
Glad to be of help. Was a C programmer over 40 years ago, so I understand the value of "help"! ;)
Vince
QUOTE:
All that said, it's possible that forms are handled differently under Vista.
Yes, it's possible. Also, PSO uses the ZedGraph control, which became abandonware after its author's death. This could be an issue with the control itself under Vista. The good thing is that Vista is officially unsupported both by Microsoft and by us, so let me recommend using WLP under Windows 7 or 8.
Extension has been updated. Thanks Len.
Nice job Len! Thanks for all of your efforts!!
Next time you do an update, would you consider putting a "pause" option into the process? That would allow the user to assess the progress of the optimization process without having to terminate it. Thanks!
Vince
Vince,
Because I (and Eugene?) see real-time results on the "Progress Log" and "Fitness Graph" tabs, I can't justify that request. I can already assess the progress immediately without terminating.
My experience is that little improvement is seen after the 15th or 16th iteration. So, I usually set "Iterations" to 20, but usually cancel once I observe 3-5 iterations without a new best. The Wiki suggests, "A suggested starting place is 10 to 20 Particles and 8 to 12 iterations."
Because some of my strategies take over two minutes for a single calculation (many symbols, long time period, neural networks), I often click "Copy To Clipboard" and paste to Notepad as it runs, so as not to lose anything.
<light bulb clicking on> Here's something for you to try (low probability?). Click "Copy To Clipboard" and see what you get. Forms events may be triggered that cause a form update. EDIT: This might be a workaround on your Vista machine.
Hope that helps.
Len
QUOTE:
Next time you do an update, would you consider putting a "pause" option into the process?
In my opinion, unnecessary, confusing options and buttons should be exterminated, not introduced. The fewer of them, the better. I also see real-time results and do believe that it's a Vista and/or ZedGraph issue.
Thanks Guys for the suggestions! Len, I will try your suggestion as soon as the current process completes. That might be a great solution if it works.
I understand your points, but my suggestion was more along the lines of allowing the user to run a backtest of a particular instance, to see if there were any undesirable performance effects that would not be seen in the metrics.
One other suggestion (I never seem to run out of them! ;) ) is to ignore "zero trade" instances in the optimization process. I had an instance where the process was attracted to those parameters, and I ended up terminating the process (I can send you a screenshot that I took). I am not sure how often that might occur, but I have seen it happen in other circumstances in the past.
Vince
QUOTE:
... my suggestion was more along the lines of allowing the user to run a backtest of a particular instance ...
Let the optimization run and open a second instance of the strategy, set the parameters, and run it. I'm pretty sure that will work.
Ignoring "zero trade" instances may be possible. I'll know more after I run a simple test.
Suggest away. I'll listen. I meant it in post #1: "Feedback would be appreciated."
Len
QUOTE:
<light bulb clicking on> Here's something for you to try (low probability?). Click "Copy To Clipboard" and see what you get. Forms events may be triggered that cause a form update. EDIT: This might be a workaround on your Vista machine.
Good call Len! Works like a charm!!
QUOTE:
QUOTE:
... my suggestion was more along the lines of allowing the user to run a backtest of a particular instance ...
Let the optimization run and open a second instance of the strategy, set the parameters, and run it. I'm pretty sure that will work.
Another good call. Thanks!
I was thinking about the impact of eliminating "zero trade" instances, and this MIGHT permit optimization by minimizing a metric, since the singularity would be removed.
Vince
Vince,
Well.... If zero trades are occurring in the optimization, the optimization is likely set up incorrectly. Either the Strategy Parameters' Start and Stop values are set too wide (wasted calc time and reduced odds of finding a solution) or the data set is too small in duration or symbol count (risk of overfitting). Either way, the result is not likely to be very robust.
Minimizing a metric works now (more correctly, the problem under discussion is "optimizing towards zero") if the zero trades condition is avoided.
Even so, I've added it to my TODO list, mostly because it's easy and low risk. I only need to extend the logic that handles invalid metric values, like NaN.
Len
Len,
QUOTE:
Well.... If zero trades are occurring in the optimization, the optimization is likely set up incorrectly. Either the Strategy Parameters' Start and Stop values are set too wide (wasted calc time and reduced odds of finding a solution) or the data set is too small in duration or symbol count (risk of overfitting). Either way, the result is not likely to be very robust.
I have noticed this as an issue when the optimization variables are used as Booleans (since there is really no other way to turn behaviors on and off). When the user has them, they do often lead to "zero trade" scenarios.
QUOTE:
Minimizing a metric works now (more correctly, the problem under discussion is "optimizing towards zero") if the zero trades condition is avoided.
Even so, I've added it to my TODO list, mostly because it's easy and low risk. I only need to extend the logic that handles invalid metric values, like NaN.
I am looking forward to that capability. Thanks for your effort!
Another subject - the PRNG that you are using. What type is it and how do you initialize it?
The reason I am asking is that I have seen some odd behavior a couple of times and wonder if there is some degree of correlation in it. Probably not an issue, but I thought that I would ask.
Vince
QUOTE:
I have noticed this as an issue when the optimization variables are used as Booleans
For this case, I don't usually let the Boolean be optimized (by setting Start/Stop/Step = 0/0/-1 or 1/1/-1). That's because where I've used Boolean parameters, they have a profound effect, dramatically changing the meaning of the other parameters. Intuitively, I would expect PSO (and GA) not to optimize well when this is the case. So I'll optimize twice to avoid it. It wouldn't matter for Monte Carlo or Exhaustive.
QUOTE:
PRNG that you are using
The PRNG, with a few exceptions, is "Random", initialized as new Random(Guid.NewGuid().GetHashCode()); it deliberately initializes differently with each optimizer run. The Tribes algorithm has some specific cases that need a normally distributed number. There I used RandomGenerator().NormalDeviate. I found the code here: http://blog.msevestre.ca/2010/12/how-to-generate-gaussian-random-numbers.html. I changed its Random initializations as above.
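For anyone curious, here is a self-contained sketch of that approach - a polar Box-Muller normal deviate, seeded as described above. This is my reconstruction of the linked technique, not the extension's exact code:
CODE:
using System;

class RandomGenerator
{
    private readonly Random _rng = new Random(Guid.NewGuid().GetHashCode());

    // Returns a normally distributed number via the polar Box-Muller method.
    public double NormalDeviate(double mean = 0.0, double stdDev = 1.0)
    {
        double u, v, s;
        do
        {
            u = 2.0 * _rng.NextDouble() - 1.0;   // uniform in (-1, 1)
            v = 2.0 * _rng.NextDouble() - 1.0;
            s = u * u + v * v;
        } while (s >= 1.0 || s == 0.0);          // keep points inside the unit circle
        return mean + stdDev * u * Math.Sqrt(-2.0 * Math.Log(s) / s);
    }
}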
Len
I'm having trouble with this extension. As soon as I click on Particle Swarm under Select an Optimization Method, I get the attached error message regarding not being able to load ZedGraph. Any suggestions?
I'm running Windows 7 x64.
Please update the extension again and retry. I forgot to attach the ZedGraph library when publishing. Sorry for the inconvenience.
Thanks. I uninstalled it and reinstalled it, running Wealth-Lab Pro as an administrator. It now runs as expected.
Thank you very much Leonard for all the work you've obviously put into this.
I've been playing around with it a little this evening, and my initial impression is that you are correct that the Fully Informed (FIPS) comes nearest the global optimum, and that having more Particles is more important than having more Iterations.
Some of my parameters are large numbers in the range of 0-100 and some are much smaller, in the range of 0-5. One thing I've noticed is that the algorithm seems to get the larger-number parameters right most of the time, but the inability to find the ideal small-number parameters is what keeps it from achieving a global optimum. I don't know anything about the theory behind Particle Swarm, but I'm wondering if scaling and normalizing the values of the parameters would improve the algorithm.
Kurt,
Thanks for the feedback.
QUOTE:
... my initial impression is that you are correct that the Fully Informed (FIPS) comes nearest the global optimum
About FIPS, the Wiki states: "Fully Informed (FIPS) - This is Mendes' fully informed particle swarm. Particles are attracted to all particle bests, not just their own. Research states this tends to overemphasize the center of the solution space." That's not quite the same thing as coming nearest the global optimum. In fact, this stated tendency towards the center of the solution space may be problematic if Start/Stop/Step are not well chosen.
I deliberately ducked the question of which algorithm is "best" because I think it may depend on the strategy. I personally use the Clerc Tribes algorithm to start because I know the underlying theory. When progress towards a solution retreats, the algorithm adds more particles, both based on knowledge attained and also randomly. This adding of new random particles should help avoid local maxima. When progress is good, it removes low-fitness particles, improving speed.
QUOTE:
I'm wondering if scaling and normalizing the values of the parameters would improve the algorithm.
I didn't see anything in the research suggesting this. The optimizer works on each parameter independently, picking points along the continuum from Start to Stop. I don't know if scaling/normalizing would help, but one idea you can try is to make the number of steps the same for every parameter. So the 1 to 100 parameter might have a Step Size of 10, while the 0-5 parameter might have a Step Size of 0.5; both then have 10 steps. (A one-line sketch follows.)
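Expressed as arithmetic (my illustration of the suggestion above, not extension code):
CODE:
// Choose Step so every parameter spans the same number of steps.
static double StepFor(double start, double stop, int steps) => (stop - start) / steps;
// e.g. StepFor(0, 100, 10) == 10.0  and  StepFor(0, 5, 10) == 0.5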
Finally, the PSO optimizer is also not intended for parameters that can't be ordered, for example, where "1" means "use Bollinger Bands" and "2" means "use Keltner Bands."
Len
Thanks for the guidance about using Particle Swarm. Perhaps Fully Informed works well for me because I always set my parameter limits higher than the global optimum, and the Step Size in my parameters results in approximately the same number of steps for each parameter. I'll spend more time comparing the various algorithms to the results from exhaustive runs over the weekend.
You may recall that I would like to find a way to optimize the number of positions a strategy holds, and I think the only way to do that might be through a custom Optimizer.
I know you don't want to share your code for your Particle Swarm Optimizer, which is fine. Would you be willing to share the portion of your code (publicly or privately) which interacts with Wealth-Lab so I don't have to start from scratch?
Kurt,
Another piece of guidance about where PSO works best comes from Maurice Clerc's book, page 172, where he states: "On the other hand, if we place ourselves now in the heart of the field of competence of the traditional PSO, i.e. roughly the continuous and mixed continuous-discrete (non-combinatorial) problems, it is remarkably effective." (The underlining is mine.)
QUOTE:
You may recall that I would like to find a way to optimize the number of positions a strategy holds, and I think the only way to do that might be through a custom Optimizer.
Hopefully you've read this: https://www.fidelity.com/bin-public/060_www_fidelity_com/documents/WLP_Optimizers.pdf, where the optimizer interface is explained.
I don't see how you can use an optimizer to optimize the number of positions, because I don't think you can programmatically alter Percent Of Equity (or any PosSizer parameter) during an optimizer run. An optimizer is a subroutine of WL's optimizer host. It's the host that runs the strategy and the PosSizer, handing back the user's selected scorecard, post-PosSizer, at each call to the optimizer's "NextRun" routine.
Optimizing the number of positions can be done much more simply using a regular strategy. I built such a strategy using the design I outlined in post #4 of this thread: http://www.wealth-lab.com/Forum/Posts/Optimizing-Number-of-Positions-35459. It works and successfully optimizes Percent Of Equity.
Len
Hi Len!
I noticed a small issue with the new version of PSO that I thought you might have resolved previously. I was running an optimization to maximize "Expectancy (trad)" and it appears that NaNs are causing some issues (see attached).
Vince
QUOTE:
small issue with the new version of PSO that I thought you might have resolved previously
I can duplicate it. I will note here if/when there's a fix.
When reporting problems, the Progress Log is particularly helpful to me, especially the top four lines. You can scramble specifics to your comfort level, replacing strategy and data set name, for instance. Click on "Copy To Clipboard" and paste into Notepad (any editor). Things I use for problem diagnosis are highlighted. Thanks.
11:25:54 26Jan2015: Particle Swarm Optimizer (2015.01.5)
11:25:54 Strategy 'MyStrategy' using (Daily) DataSet 'MyDataset' (30 symbols)
11:25:54 10% equity, 1/10/2009 to 12/31/2020
11:25:54 Maximize 'Expectancy (trad.)' with 20 iterations of 12 particles. (Algorithm: Basic PSO with Genetic crossover)
Hi Len!
Here it is...
Vince
I did some testing to see how close Particle Swarm came to the results achieved by Exhaustive optimization. To do this, I first ran an Exhaustive optimization, and sorted the results by Net Profit, so the highest Net Profit was ranked #1, the second highest, #2, etc. I then ran Particle Swarm using various algorithms on the same data and recorded the rank of the best result found by Particle Swarm.
For my initial tests, I used the midpoint of the suggested values for the Particle Swarm algorithms, 15 particles and 10 iterations. The results are attached as Particle Swarm Results 1. While all of the algorithms were much faster than an Exhaustive optimization, I was not happy with the fact that in at least one of my initial tests, each algorithm was unable to find even the tenth best result.
For my second set of tests, I increased the number of particles to 20 and iterations to 15. The results are attached as Particle Swarm Results 2. That made a significant difference for two of the algorithms -- Basic PSO and Clerc Tribes. In terms of accuracy, Basic PSO was the clear winner, finding the global optimum 69% of the time versus 44% for Clerc Tribes. In terms of speed, Clerc Tribes was the clear winner, on average taking 14% of the time of an Exhaustive optimization versus 24% for Basic PSO. Based on my testing, I can see no reason to use the other two algorithms I tested -- Comprehensive (CLPSO) and Fully Informed (FIPS).
My tests were on a strategy with four parameters being optimized, with steps that resulted in 480 runs for an Exhaustive optimization. The Data Set consisted of approximately 500 stocks, and the tests were made using various Date Ranges between 1 and 10 years. The Exhaustive optimizations produced more or less continuous results, with the differences among the best-ranked results being an increase or decrease of one step in one or two of the parameters.
I realize that the 32 tests I ran aren't a very large sample. My purpose was not to do an exhaustive analysis, but to get a feel for what Particle Swarm is capable of and how to use it. Having invested over 40 hours of CPU time to run these tests, I think I have the answers I need and wanted to pass along what I've learned to the community.
Thank you for the significant effort you made in running your tests and sharing the results. I was surprised by one of your results: how well Basic PSO fared against the others. It was the first algorithm implemented, and the others were added because I was actually disappointed by Basic often failing to find the best combinations. As noted earlier in this thread, I use "Tribes" most often, setting the number of iterations to 20 but often cancelling by about 15.
QUOTE:
For my second set of tests, I increased the number of particles to 20 and iterations to 15.
That seems like a good idea since you got a much better result. My experience is that 10 iterations is a bare minimum - and that little if any improvement is seen after the 15th iteration.
My impetus to write PSO was that I was unable to run Exhaustive due to time constraints - some of my strategies take over two minutes for a single calculation. Said another way, 480 combinations would take sixteen hours. I've also been liberal in using parameters (maybe too much so), whereby 480 combinations are not nearly enough. What I like about PSO is that I can use a smaller step size with no execution-time penalty. Of course, using a smaller step size can lead to overfitting, when one small step can make a big difference in outcome.
Len
Hi Len!
Here is a suggestion for you to consider for the next release of PSO: an option to seed the process with the script's default parameters as the first particle. This would provide an instance in the search space that the user wants to have examined more thoroughly.
I have had several occasions when an interesting parameter combination led to an increase in the fitness function in the very late stages of the optimization, and I would have appreciated an opportunity to pursue it further.
Thanks for considering!
Vince
QUOTE:
an interesting parameter combination led to an increase in the fitness function in the very late stages of the optimization, and I would have appreciated an opportunity to pursue it further.
I think I know what you mean. Here's what I do. For a new strategy, I guess at Start and Stop, run the optimization for maybe 6 or 7 iterations, and look at the "Results" vis-à-vis the Start and Stop I selected. Where the best fitness occurs at a Start or Stop value, I slide the range to put that value in the middle of the range and restart the optimization.
To the point of your suggestion: you can explore an interesting set of parameters now by adjusting the ranges, and I think that's sufficient. Consider, also, that the point may not be near the maximum in all of the search space. By forcing that relatively good point to be calculated early in the optimization, you would steer the optimization toward what may be only a local maximum.
Len,
Any idea what this error means: "Cross-thread operation not valid: Control 'lvErrors' accessed from a thread other than the thread it was created on"? I get it periodically running Clerc Tribes, but not in a way that I can replicate.
I've never seen this, and "Control lvErrors" is not part of my optimizer. It must be part of the optimization framework.
How far has the optimization gotten? Are you able to copy the Progress Log ("Copy To Clipboard" button) and paste it here?
I confirm that lvErrors is part of the Optimization tool (Errors tab). This sounds like an internal error thrown by WL itself.
Hi Len!
I have noticed some anomalies with PSO on occasion when I use the Optimizer, view the results, and use "Set these parameter values..." to push the numbers to the strategy file.
Here is the header info:
QUOTE:
11:24:50 25Feb2015: Particle Swarm Optimizer (2015.01.2)
11:24:50 Strategy 'Test file' using (Daily) DataSet 'Dow 30' (30 symbols)
11:24:50 32% equity, 1/13/2011 to 1/13/2015
11:24:50 Maximize 'Net Profit' with 200 iterations of 500 particles. (Algorithm: Basic PSO with Genetic crossover)
and here is an example of the anomaly
QUOTE:
Slider1 = CreateParameter("Slider 1", 119.244985284379, 20, 200, 10);
Slider2 = CreateParameter("Slider 2", 200, 20, 200, 10);
Slider3 = CreateParameter("Slider 3", 169.322702529385, 40, 200, 10);
Slider4 = CreateParameter("Slider 4", 141.097222236898, 20, 160, 10);
Slider5 = CreateParameter("Slider 5", 32.4193150674068, 10, 50, 10);
I do not know why I am getting doubles that are so far from the default (integer?) steps. Any thoughts?
Vince
Interesting. First, let me ask: do the parameters look OK (rounded to 10) as seen on the Results tab? If so, then I would guess the problem to be in the "Optimization tool" (WL). Another question: are Start/Stop/Step as you've shown in the CreateParameter statements, or were they altered? My code expects an exact number of "Steps" between "Start" and "Stop". You might get your ragged result with Start/Stop/Step = 0/100/11, where the range of 100 isn't evenly divisible by 11. EDIT: Looking more carefully at my code, it seems you won't get a ragged result, but rather may never achieve the "Stop" value. In my example, it may only reach 99, the highest multiple of 11 not exceeding 100.
Sidebar: An observation from the header (thank you for sharing that) is that your number of particles and iterations is an order of magnitude too high. I typically use 15 particles and 20 iterations and rarely go even as high as 30 particles. PSO isn't intended to mimic Exhaustive, and, yes, it may not find the absolute best parameters, but that's a trade-off for speed. IMHO, the absolute best (Exhaustive) solution may well be overfit.
Hi Len!
Your question: Yes, the numbers look perfectly OK in the Results tab. It is only after the "push" that I see the strange results in the strategy file. All aspects of the files were exactly as I used in the optimizer, and the Start/Stop/Step values are the way I set them up. No alterations after the fact.
Your sidebar: Yes, I know. I was essentially doing an "exhaustive" with your tool. ;)
Vince
Vince, I don't think I can help. The "push" is done by the "Optimizer tool."
Eugene / Cone,
I guess this is yours. Do you want me to submit a Support Ticket?
Vince
Vince,
If you can reproduce it with the Exhaustive optimizer, then yes.
I think I've figured out what was throwing the lvErrors. When I change the Position Size or Data Range in a DataSet with a lot of symbols and/or a large Data Range, it takes a little while for Wealth-Lab to recalculate the Performance on the new data. I may have clicked Begin Optimization before the recalculation was completed. I've now been very careful not to begin a new optimization until the recalculation is complete, and haven't seen the lvErrors.
QUOTE:
If you can reproduce it with the Exhaustive optimizer, then yes.
I will see if I can generate a strategy that fails reproducibly.
Vince.
Len / Eugene,
Thanks for the optimizer..
The key challenge most of us face is robustness (more than speed) of optimization - i.e., an optimization that will survive the walk-forward / out-of-sample test is more crucial than one that runs fast or finds the most optimal point.
- Of the 3 algorithms (Particle Swarm, Monte Carlo, Genetic), which optimizer can be tuned to provide a robust optimal result?
- What are the settings for a robust optimization with the Particle Swarm optimizer?
thx
Kiran
QUOTE:
... which optimizer can be tuned to provide a robust optimal result?
I'm not qualified to compare them, but in my experience the results are similar. Because all three use randomness, there is a risk of the result being a local maximum. Remember to consider outliers when evaluating the "optimal" result. It may be overfit, looking good only because a few lucky trades influenced the result.
QUOTE:
what are the settings for a robust optimization with Particle Swarm optimizer?
I can only tell you what I do. After six months of using PSO, I find myself using either the "Clerc Tribes" or "Clerc Basic w/GA" algorithm. I usually use 12-15 Particles for "Basic". I set Iterations to 30, but usually cancel at about 15 if there have not been new highs (lows) in about five iterations.
I would expect the biggest influence on "robustness" to be the number of Particles, where more is better.
The Particle Swarm Optimizer has been updated (Release 2015.07.1) and may be downloaded from "Extensions". Most of the changes are related to more recent papers on PSO.
Improvements include:
1. Minor bug fix: handles a "zero trades" calculation result like an invalid fitness value
2. Implemented four additional algorithms - GPAC, LPAC, Coordinated Aggregation, and Simulated Annealing PSO. GPAC has tested especially well with my strategies. These are described in the Wiki.
3. Implemented the Hammersley algorithm for placement of initial particles (except Tribes). With this change, initial particle positions are uniformly rather than randomly distributed. (A sketch of the construction follows this list.)
4. Except for Tribes, initial velocity is now set per Clerc's "SPSO 2011", such that the first movement is less likely to immediately leave the solution space
5. Updated Clerc OEP0 w, c1, and c2 values per (newer) Clerc 2011 paper
6. Minor modifications to Progress Log
7. Updated Swarm Optimizer documentation. See WealthLab Wiki.
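For item 3, here is a compact sketch of the standard Hammersley construction (my illustration, not the extension's exact code). Each coordinate lands in [0,1) and would then be scaled to a parameter's Start-Stop range:
CODE:
// Point i of n in d dimensions: coordinate 0 is i/n; coordinate k>0 is the
// radical inverse of i in the k-th prime base (2, 3, 5, ...).
static double RadicalInverse(int i, int b)
{
    double f = 1.0, r = 0.0;
    while (i > 0) { f /= b; r += f * (i % b); i /= b; }
    return r;
}

static double[] HammersleyPoint(int i, int n, int[] primeBases)
{
    var x = new double[primeBases.Length + 1];
    x[0] = (double)i / n;                        // first coordinate: evenly spaced
    for (int k = 0; k < primeBases.Length; k++)
        x[k + 1] = RadicalInverse(i, primeBases[k]);
    return x;
}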
Len
Do you have some suggested parameters for GPAC, or do the defaults work pretty well?
For GPAC, I generally use 30 particles and set iterations to 30, but I usually stop the optimization run after a few iterations without a new best - typically around the 12th to 15th iteration. I'm watching for a flattening of the graphed best.
I have found it depends on the strategy and the portfolio size; in my research I haven't found a rule for this. Maurice Clerc is a good source of PSO information, and even he says, "The suggested value is 40. ... Clearly, there is still a lack of theoretical analysis about how an adaptive swarm size could be determined." Many of my S&P 500 strategies have run times where 40 particles times 10 iterations would run more than 12 hours. So I use somewhat smaller values for that reason and get reasonable results.
Thanks for all the work you keep putting into this.
I've been comparing GPAC to Clerc Tribes on various optimizations. It appears that GPAC is much more prone to getting stuck at local optima than Clerc Tribes, which I have found to do a pretty good job of at least getting near the global maximum. I've tried 20, 30, 40 and 60 GPAC particles, and it doesn't seem to have a noticeable effect on the end result. Adding more particles does, however, substantially decrease the number of required iterations. With 40 particles, it often reaches the best result in about 5 iterations, and with 60 particles, it only takes a couple of iterations.
On the other hand, Clerc Tribes continues to do better with more iterations, often reaching new bests after 20 iterations. While Clerc Tribes takes longer to run (with a lot of iterations), I'd rather use an algorithm that gets me close to the optimal values than one that only gets me to a local maximum.
fyi -- I use Clerc Tribes to get me close to the global maximum and then use brute force to check +/- a couple of steps around the Clerc Tribes best for each parameter. While it results in a two-step optimization process, it gives me a high level of confidence that I am truly getting optimal parameter values.
Panache,
Thanks for the feedback. It is very helpful to hear about results using strategies that are likely very different than mine.
Intuitively, Clerc Tribes is less likely to become trapped because it continually adds particles that have different characteristics. The new particles are not just random; there are also side particles, vertex particles, and "top quintile" particles. Tribes has been my workhorse, too. I don't typically follow up with a narrowed exhaustive optimization, but I can see some value in that.
I also agree that GPAC seems to be quickest, and that it can get trapped at a local maximum.
I am studying an "adaptive learning" PSO but haven't decided whether to code it. Conceptually, it seems to have promise in terms of escaping local maxima, but 1. it may be slow, and 2. it wouldn't fit easily into my existing framework, so more work there.
I'll keep you posted.
Len
Hi Len,
First, thanks for putting together such great tools.
A question about PSO performance. I have noticed that on some strategies, PSO execution time seems to grow exponentially as the optimization progresses, particularly with Clerc Tribes. A log is below. Is there something I can do to avoid this? CPU usage is 100%; no memory pressure.
Thanks,
Ron
CODE:
Please log in to see this code.
QUOTE:
execution time seems to grow exponentially
That's because more calculations are done with each successive iteration. Unlike the other PSO methods, "Tribes" varies the number of particles based on progress toward a solution. It adds a new tribe every three iterations; the size of the added tribe is based on the progress made. It can also add (or remove) particles in existing tribes based on progress. The net result is usually more particles as the optimization run progresses.
The "Starting iteration" message shows the number of particles/calculations. You can see it in your log: at 18:02:31 it calculated 45 particles in about 2.5 minutes (roughly 3.3 seconds per calculation); at 18:35:35, 92 particles in 6 minutes (roughly 3.9 seconds per calculation). There's not much difference per calculation.
Bottom line: this is expected behavior, and there is nothing to be done about it.
Len: Does PSO use the strategy parameter step increments specified in CreateParameter()? My understanding is that for floating-point variables, other optimizers (or at least GA) ignore the step values from CreateParameter() and instead select values within (min,max) using their own algorithms. Does PSO do this, or is PSO limited to the step increments in CreateParameter()?
Thanks,
Ron
PSO is deliberately restricted to Step values, so the user can control that aspect of the optimization. Too small a step can result in overfitting. I haven't tested what happens when it is not an integral number of steps from "Start" to "Stop". Internally, the parameter is calculated (and held within the optimizer) without regard to "Step", but it is rounded to a Step increment before being returned to the WL host for calculation, by the code below.
CODE:
Please log in to see this code.
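Since the code itself is behind the login, here is a rough sketch of that snap-to-grid rounding (my own illustration, consistent with the behavior described earlier where Start/Stop/Step = 0/100/11 can reach only 99):
CODE:
// Snap the optimizer's internal continuous value onto the Start + k*Step grid.
static double SnapToStep(double value, double start, double stop, double step)
{
    int maxK = (int)Math.Floor((stop - start) / step);   // highest on-grid index
    int k = (int)Math.Round((value - start) / step);
    k = Math.Max(0, Math.Min(maxK, k));                  // clamp into the grid
    return start + k * step;                             // 0/100/11 tops out at 99
}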
Hi Leonard... hope the beginning of the year is being good to you.
It seems that release 6.9.15.0 finally includes the option of excluding individual parameters from the optimization.
Any chance you can bring this option into your optimizer?
Jorge,
Thank you for your interest. The markets are trying hard to ruin my "beginning of year."
To your question... Depending on how this feature was implemented, it may already work with PSO. If you've downloaded 6.9.15, try it and let me know. I haven't seen new documentation and am not ready to risk a new release.
Otherwise, I need to do some research before I can pursue this. It may be impossible for the same version of PSO to work with both WLP and WLD until WLD is also upgraded to 6.9, due to interface changes. I'll expand this answer when I know more.
Len
I have tried it, and it is ignoring the checkboxes; it still optimizes all the present parameters.
Jorge,
The thing is, this change is breaking. Rebuilding this optimizer to support the upgraded API will automatically make PSO incompatible with WL version 6.8, which is here to stay in the production environments of many users for quite some time. Upgrades of WL Developer are rarely mandatory (if ever), and some cautious customers go by the motto "If something works, don't change it!" We should be really careful this time.
Jorge,
Thank you for checking. Since it's "ignoring the checkboxes" as-is, the optimizer will need to be changed to implement this feature. That is impacted by Eugene's info (post #84). I called Fidelity, who could not provide any clarification. Bottom line? It may be a while before the checkbox feature is implemented for PSO. It's out of my control.
Len
Luckily it still works, so it's no big deal. We'll be patiently waiting for that rebuild to enable this functionality.
I installed 6.9.15. Per the new User Guide, the parameter checkboxes only work with Exhaustive and Monte Carlo...
QUOTE:
Parameter Checkboxes:
Uncheck a parameter to exclude it from full optimization processes such as Exhaustive or Monte Carlo optimizations. When a parameter is not checked, optimizations will apply the parameter's default value shown in the Optimization Control. Other add-in optimizers may require modification for this feature to work.
I experimentally verified that statement. It does not work with "Genetic" optimization.
Hi Len!
Quick question: how does PSO handle an "infinity" metric value when choosing members for the next iteration? I am HOPING that it ignores them, since they invariably result from simulations that produce a single trade.
Thanks!
Vince
It ignores that result (and some other conditions):
CODE:
Please log in to see this code.
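Since the code is behind the login, here is a rough sketch of the conditions named in this thread (NaN, infinity, MaxValue, MinValue) - a simplified illustration, not the exact shipping code:
CODE:
// Treat these metric results as invalid so they never attract the swarm.
static bool IsInvalidMetric(double metric) =>
    double.IsNaN(metric) || double.IsInfinity(metric)
    || metric == double.MaxValue || metric == double.MinValue;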
Thanks Len! You seem to have covered all of the bases. Great work!!
Vince
Particle Swarm Optimizer references Fidelity.Components (I don't recall why). I run a proprietary version of the optimizer. When WLP is deprecated, how will that need to change?
I really do like this optimizer. I hope it doesn't break on WLD.
QUOTE:
I hope it doesn't break on WLD
It will run. It has been running on WLD just as long as it has on WLP. Eugene made the presumptive modification.
No changes required. Referencing Fidelity.Components is totally OK. It's been part of WLD.