HHO is a popular swarm-based, gradient-free optimization algorithm with several active and time-varying phases of exploration and exploitation. The algorithm was originally published in the prestigious journal Future Generation Computer Systems (FGCS) in 2019, and from the first day it has gained increasing attention among researchers due to its flexible structure, high performance, and high-quality results. The main logic of the HHO method is based on the cooperative behavior and chasing style of Harris' hawks in nature, called the "surprise pounce". There are currently many suggestions on how to enhance the functionality of HHO, and several enhanced variants of HHO have appeared in leading Elsevier and IEEE Transactions journals.
The story behind the idea is beautiful and simple. Harris' hawks can display a variety of team chasing patterns depending on the dynamic nature of the scenario and the escaping patterns of the rabbit. The hawks wait and then attack together from different directions, while the rabbit runs with several zig-zag motions.
From the viewpoint of algorithmic behavior, HHO has several effective features:
The escaping energy parameter has a dynamic, randomized, time-varying nature, which helps harmonize the exploratory and exploitative patterns of HHO. This factor also lets HHO make a smooth transition between exploration and exploitation (see the first sketch after this list).
Different exploration mechanisms based on the average location of the hawks increase the exploratory trends of HHO during the initial iterations (see the second sketch after this list).
Diverse Lévy-flight (LF)-based patterns with short-length jumps enrich the exploitative behaviors of HHO when performing a local search (see the third sketch after this list).
The progressive (greedy) selection scheme allows search agents to advance only to a better position, which improves the quality of solutions and the intensification power of HHO throughout the optimization procedure.
HHO generates a series of candidate search moves and then selects the best one. This feature also has a constructive influence on the exploitation tendencies of HHO.
The randomized jump strength helps candidate solutions harmonize their exploration and exploitation tendencies.
The use of adaptive and time-varying components allows HHO to handle the difficulties of a search space that includes local optima, multi-modality, and deceptive optima.
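To make the first two points concrete, here is a minimal Python sketch of the time-varying escaping energy and the randomized jump strength. It assumes the commonly cited form E = 2*E0*(1 - t/T) with the initial energy E0 drawn from [-1, 1] and J = 2*(1 - r); the function names are illustrative only, and the official Matlab/Python codes linked below remain the reference implementation.

```python
import numpy as np

def escaping_energy(t, T):
    """Time-varying escaping energy at iteration t of T (assumed form
    E = 2 * E0 * (1 - t/T), with the initial energy E0 drawn from [-1, 1])."""
    E0 = 2.0 * np.random.rand() - 1.0   # randomized initial energy in [-1, 1]
    return 2.0 * E0 * (1.0 - t / T)     # magnitude decays linearly over the run

def jump_strength():
    """Randomized jump strength J in (0, 2], redrawn at every update step."""
    return 2.0 * (1.0 - np.random.rand())

# The decaying envelope |E| <= 2 * (1 - t/T) yields the smooth transition:
# hawks explore while |E| >= 1 and switch to exploitation once |E| < 1.
E = escaping_energy(t=30, T=100)
phase = "exploration" if abs(E) >= 1 else "exploitation"
```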
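The exploration phase (used while |E| >= 1) can be sketched in a similar way. The two perching rules below, one relative to a randomly selected hawk and one relative to the rabbit and the mean position of the flock, follow the published description of HHO; the function explore and its arguments are hypothetical names used only for illustration.

```python
import numpy as np

def explore(X, i, X_rabbit, lb, ub):
    """One exploration move for hawk i, assuming X is the (n, dim) matrix of
    hawk positions, X_rabbit is the best solution so far, and lb/ub are the
    variable bounds."""
    n, dim = X.shape
    r1, r2, r3, r4, q = np.random.rand(5)
    if q >= 0.5:
        # perch based on a randomly selected member of the flock
        X_rand = X[np.random.randint(n)]
        return X_rand - r1 * np.abs(X_rand - 2.0 * r2 * X[i])
    # perch based on the rabbit and the average position of all hawks
    X_mean = X.mean(axis=0)
    return (X_rabbit - X_mean) - r3 * (lb + r4 * (ub - lb))
```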
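Finally, a sketch of how the short Lévy-flight jumps and the greedy selection work together during the besiege (exploitation) phase. The Lévy step uses Mantegna's algorithm with beta = 1.5; soft_besiege_with_dives, levy, and the objective f are illustrative names, and a minimization objective is assumed. The greedy comparison at the end is what keeps the hawks from ever moving to a worse position.

```python
import numpy as np
from math import gamma, pi, sin

def levy(dim, beta=1.5):
    """Lévy flight step (Mantegna's algorithm), used for the short rapid dives."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.randn(dim) * sigma
    v = np.random.randn(dim)
    return 0.01 * u / np.abs(v) ** (1 / beta)

def soft_besiege_with_dives(X_i, X_rabbit, E, J, f):
    """Propose a soft-besiege step Y and a Lévy-based dive Z, then greedily
    keep a candidate only if it beats the current position X_i under the
    user-supplied objective f (to be minimized)."""
    dim = X_i.size
    Y = X_rabbit - E * np.abs(J * X_rabbit - X_i)   # soft-besiege move
    Z = Y + np.random.rand(dim) * levy(dim)         # progressive rapid dive
    best = X_i
    for cand in (Y, Z):
        if f(cand) < f(best):                       # greedy (progressive) selection
            best = cand
    return best
```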
Matlab codes of HHO are publicly available here
Java codes of HHO are publicly available here
Python codes of HHO are publicly available here
LaTeX codes of HHO are publicly available here
Visio files of figures in HHO section are publicly available here
A GitHub project with the related repository and wiki is available here
A live CodeOcean capsule of the HHO code, ready to run online, is available here
You can also check ResearchGate to find these files here
You can download the paper from here
You can download the extended file of the published paper from here
If you do not have access to ScienceDirect, please drop Dr. Ali Asghar Heidari an e-mail here and he will send you the paper.
If you have any questions regarding HHO, need help with your research or with the code of your modified HHO, need assistance in modeling your problem (objective function), or need help with writing, ideas, plots, or your proposal and manuscript, please simply drop an email here and I will help you online.
I will always be happy to cooperate with you if you have any new idea or proposal for the HHO algorithm. You can contact me at any time. Let's enjoy finding the optimal solutions to your real-world problems.