Algorithm Background

Abdulkadir Özcan edited this page Mar 23, 2026 · 1 revision

This page explains how Harmony Search works, what enhancements harmonix adds on top of the original algorithm, how constraints are handled, and how the multi-objective extension operates. It also lists the academic references the implementation is based on.


The original Harmony Search algorithm

Harmony Search (HS) was introduced by Geem, Kim, and Loganathan in 2001, inspired by the improvisation process of jazz musicians. A musician trying to find a perfect harmony either plays a note from memory, plays a note from memory with a slight adjustment, or plays a completely random note. The algorithm maps this directly onto optimisation:

  • The harmony memory is a fixed-size pool of candidate solutions (harmonies), each of which is a complete assignment of values to all design variables.
  • At each iteration, a new harmony is improvised by deciding, for each variable independently, whether to draw from memory or sample randomly.
  • If the new harmony is better than the worst in memory, it replaces it.

Three parameters control this process:

| Parameter | Abbreviation | Role |
| --- | --- | --- |
| Harmony Memory Size | HMS | Number of solutions kept in memory. Larger pools increase diversity but slow convergence. |
| Harmony Memory Considering Rate | HMCR | Probability of drawing a variable's value from memory rather than sampling randomly. Controls exploitation vs exploration. |
| Pitch Adjusting Rate | PAR | Given that HMCR fires, the probability of further perturbing the chosen memory value. Controls local refinement. |

The improvisation step in harmonix

For each variable in definition order, the improviser does the following:

  1. With probability HMCR — memory consideration:
    1. Collect all values for this variable currently in the harmony memory.
    2. Pass them through filter(candidates, ctx) to keep only those feasible given the current context.
    3. If any feasible candidates remain, pick one at random.
    4. With probability PAR — pitch adjustment: call neighbor(value, ctx) to perturb the chosen value.
  2. With probability (1 − HMCR), or if no feasible memory candidates survived the filter — random selection: call sample(ctx).

The context ctx accumulates values as each variable is assigned, so later variables can depend on earlier ones. This is the mechanism that makes dependent spaces work — by the time a dependent variable's filter or sample is called, all its dependencies are already in ctx.

One important design decision: pitch adjustment calls neighbor(value, ctx), not sample(ctx). A common incorrect implementation calls sample when PAR fires, which discards the memory value entirely and raises the effective random-exploration rate beyond what HMCR alone would suggest. harmonix avoids this.
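The improvisation loop above can be sketched in plain Python. The `Uniform` class and the memory layout (a list of dicts) are illustrative assumptions, not the harmonix API; only the `sample`/`filter`/`neighbor` conventions follow this page:

```python
import random

class Uniform:
    """Illustrative stand-in variable type (not the harmonix API)."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def sample(self, ctx):
        return random.uniform(self.lo, self.hi)
    def filter(self, candidates, ctx):
        return [c for c in candidates if self.lo <= c <= self.hi]
    def neighbor(self, value, ctx):
        bw = ctx.get("__bw__", 0.1)           # dynamic bandwidth, if injected
        return min(max(value + random.gauss(0, bw), self.lo), self.hi)

def improvise(variables, memory, hmcr, par, ctx_extra=None):
    ctx = dict(ctx_extra or {})               # e.g. {"__bw__": bw(t)}
    harmony = {}
    for name, var in variables.items():       # definition order
        value = None
        if random.random() < hmcr:            # memory consideration
            feasible = var.filter([h[name] for h in memory], ctx)
            if feasible:
                value = random.choice(feasible)
                if random.random() < par:     # pitch adjustment: perturb
                    value = var.neighbor(value, ctx)  # the value, not resample
        if value is None:                     # random selection fallback
            value = var.sample(ctx)
        ctx[name] = value                     # later variables can see it
        harmony[name] = value
    return harmony

space = {"x": Uniform(0.0, 1.0), "y": Uniform(0.0, 2.0)}
memory = [{"x": 0.5, "y": 1.0}, {"x": 0.2, "y": 1.5}]
h = improvise(space, memory, hmcr=0.9, par=0.3)
```

Note that the random-selection branch also fires when the memory-consideration branch finds no feasible candidates, matching step 2 above.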


Enhancement 1 — Dynamic bandwidth narrowing

In the original HS algorithm the pitch adjustment step size (bandwidth) is fixed throughout the run. harmonix decays it exponentially:

bw(t) = bw_max × exp(−ln(bw_max / bw_min) × t / T)

where t is the current iteration and T is max_iter. This exponential decay schedule follows the dynamic-parameter idea of the Improved Harmony Search (IHS) of Mahdavi, Fesanghary, and Damangir (2007), building on the continuous-variable formulation of Lee and Geem (2005). The effect is a natural transition from broad exploration in early iterations to precise local refinement in late iterations, without requiring the user to manually schedule parameter changes.

Only Continuous variables are affected. The bandwidth is injected into the context as ctx["__bw__"] at each iteration; Continuous.neighbor reads it to scale its Gaussian perturbation. All other variable types ignore it and step by their natural unit.
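The decay schedule is a one-liner; the bw_max and bw_min defaults below are arbitrary illustrative values, not harmonix's:

```python
import math

def bandwidth(t, max_iter, bw_max=1.0, bw_min=1e-3):
    """bw(t) = bw_max * exp(-ln(bw_max / bw_min) * t / max_iter).
    Decays exponentially from bw_max at t = 0 to bw_min at t = max_iter."""
    return bw_max * math.exp(-math.log(bw_max / bw_min) * t / max_iter)

# Endpoints: full bandwidth at the start, minimum bandwidth at the end.
start, end = bandwidth(0, 100), bandwidth(100, 100)
```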


Enhancement 2 — Deb constraint handling

Raw Harmony Search has no principled mechanism for handling constraints. The common approach — adding a penalty term to the objective — works but introduces artificial landscape features (cliffs, plateaus) that can mislead the search.

harmonix uses Deb's constraint handling method (2000), which imposes a lexicographic preference order:

  1. A feasible solution always dominates an infeasible one, regardless of their objective values.
  2. Among infeasible solutions, the one with smaller total penalty is preferred.
  3. Among feasible solutions, the one with better objective value is preferred.

This is implemented in HarmonyMemory's _dominates method, which governs both best_index and worst_index. The practical consequence is that the optimiser always drives toward feasibility first, and only competes on objective value once feasibility is achieved. Penalty terms remain in the objective signature for compatibility, but Deb's rule means a large penalty does not distort the fitness landscape for feasible candidates.
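Deb's rule reduces to a short comparison. The standalone function below is a sketch, not HarmonyMemory._dominates itself; representing a solution as a (fitness, penalty) pair and treating penalty ≤ 0 as feasible (matching the feasibility gate described later on this page) are assumptions:

```python
def deb_dominates(a, b):
    """Deb's rule over (fitness, penalty) pairs, minimisation assumed.
    Returns True if solution a is preferred to solution b."""
    fit_a, pen_a = a
    fit_b, pen_b = b
    feas_a, feas_b = pen_a <= 0, pen_b <= 0
    if feas_a and not feas_b:
        return True                 # feasible always beats infeasible
    if feas_b and not feas_a:
        return False
    if not feas_a:
        return pen_a < pen_b        # both infeasible: smaller penalty wins
    return fit_a < fit_b            # both feasible: better objective wins
```

Because the first check ignores fitness entirely, a feasible solution with a poor objective still beats an infeasible one with an excellent objective.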


Enhancement 3 — Dependent search spaces

The original HS algorithm treats all variables as independent. harmonix introduces the dependency context mechanism: variables are sampled in definition order, and each receives a dict of previously assigned values. This allows bounds, catalogue filters, and feasibility rules to reference earlier variables directly.

The result is that infeasible designs can be structurally eliminated at generation time rather than penalised after evaluation. See Dependent Spaces for the motivation, benchmark evidence, and practical guidance.
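A minimal illustration of the idea, with hypothetical variable names and bounds (this is not the harmonix space-definition API):

```python
import random

def sample_design():
    """Sample a dependent pair: `height` may never exceed `width`."""
    ctx = {}
    ctx["width"] = random.uniform(1.0, 10.0)
    # By the time height is sampled, width is already in ctx, so the
    # upper bound can reference it and infeasible pairs never arise.
    ctx["height"] = random.uniform(0.5, ctx["width"])
    return ctx

designs = [sample_design() for _ in range(100)]
```

Every generated design satisfies height ≤ width by construction; no penalty term or post-hoc rejection is needed.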


The harmony memory replacement rule

After a new harmony is improvised and evaluated, it is compared to the worst harmony currently in memory using Deb's dominance rule. If the new harmony dominates the worst, the worst is replaced. Otherwise the memory is unchanged.

This means the memory size stays fixed throughout the run and represents a sliding window of the best solutions seen so far. Unlike genetic algorithms, there is no crossover between harmonies — each new candidate is built independently from the memory pool.
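The replacement rule can be sketched as follows, assuming the dominance relation induces a total preference order (as Deb's rule does); the demo uses plain fitness values with smaller-is-better:

```python
def maybe_replace_worst(memory, candidate, dominates):
    """Find the worst memory entry under `dominates`, then replace it
    only if the candidate dominates it. Returns True on replacement."""
    worst_i = 0
    for i in range(1, len(memory)):
        if dominates(memory[worst_i], memory[i]):   # entry i is even worse
            worst_i = i
    if dominates(candidate, memory[worst_i]):
        memory[worst_i] = candidate
        return True
    return False

memory = [3.0, 1.0, 2.0]
accepted = maybe_replace_worst(memory, 0.5, lambda a, b: a < b)
```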


Multi-objective extension

harmonix extends Harmony Search to multi-objective problems using a Pareto archive, following the approach of Ricart et al. (2011). The key differences from single-objective HS are:

Archive instead of single best. Non-dominated feasible solutions are collected in a bounded Pareto archive rather than tracked as a single best. The archive is pruned by crowding distance when it exceeds archive_size: the most crowded solution (smallest crowding distance) is removed first, preserving diversity along the front.

Improvisation from the archive. When the archive is non-empty, new harmonies are improvised by drawing base values from archive entries rather than the working harmony memory. This drives the search toward the Pareto front rather than a single optimum.

Crowding distance tournament. When selecting which archive entry to use as the base for improvisation, harmonix runs a small tournament on crowding distance: the least crowded (most isolated) entry wins. This promotes exploration across the full extent of the front.

Feasibility gate. Only feasible harmonies (penalty ≤ 0) enter the archive. Infeasible harmonies remain in the working memory — the search can still learn from near-feasible regions — but they do not contaminate the Pareto front.

Dominance check. A candidate is added to the archive only if it is not dominated by any existing entry. Any existing entries dominated by the new candidate are removed before insertion.
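The archive update described above can be sketched as follows. The function names and the archive representation (a list of objective tuples) are assumptions, not the harmonix API; `crowding` stands for any function returning one distance per entry, such as the NSGA-II crowding distance:

```python
def pareto_dominates(a, b):
    """Strict Pareto dominance for minimisation over objective tuples."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate, archive_size=50, crowding=None):
    """Insert candidate into a bounded Pareto archive. Returns True if
    the candidate was accepted."""
    if any(pareto_dominates(a, candidate) for a in archive):
        return False                                    # dominated: reject
    # Remove any entries the candidate dominates, then insert it.
    archive[:] = [a for a in archive if not pareto_dominates(candidate, a)]
    archive.append(candidate)
    if crowding is not None and len(archive) > archive_size:
        d = crowding(archive)                           # prune the most
        archive.pop(min(range(len(archive)), key=lambda i: d[i]))  # crowded
    return True

front = []
update_archive(front, (1.0, 2.0))
update_archive(front, (2.0, 1.0))              # mutually non-dominated
accepted = update_archive(front, (3.0, 3.0))   # dominated by both: rejected
update_archive(front, (0.5, 0.5))              # dominates everything
```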


Pareto dominance and crowding distance

Solution a dominates solution b (written a ≻ b) when:

  • a[i] ≤ b[i] for every objective i, and
  • a[i] < b[i] for at least one objective i.

All objectives are assumed to be minimised. To maximise an objective, negate it inside the objective function.

Crowding distance measures how isolated a solution is along the objective axes. For each objective, solutions are sorted and each interior solution is assigned the normalised gap between its two neighbours. Boundary solutions (best and worst per objective) receive infinite distance. The total crowding distance is the sum across all objectives. A higher crowding distance means a more isolated solution; these are preferentially kept during archive pruning to maintain a well-spread front.
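This is the standard crowding distance of NSGA-II (Deb et al., 2002). The function below is a generic sketch of that computation, not harmonix's implementation:

```python
def crowding_distance(points):
    """NSGA-II crowding distance. `points` is a list of equal-length
    objective tuples; returns one distance per point, with boundary
    points (best/worst per objective) assigned infinity."""
    n = len(points)
    if n <= 2:
        return [float("inf")] * n
    dist = [0.0] * n
    for k in range(len(points[0])):                    # per objective
        order = sorted(range(n), key=lambda i: points[i][k])
        lo, hi = points[order[0]][k], points[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue                                   # degenerate axis
        for j in range(1, n - 1):                      # interior points:
            i = order[j]                               # normalised neighbour gap
            dist[i] += (points[order[j + 1]][k] - points[order[j - 1]][k]) / (hi - lo)
    return dist

d = crowding_distance([(0.0, 3.0), (1.0, 1.0), (3.0, 0.0)])
```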


Complexity

Each iteration of the single-objective optimiser performs:

  • O(V × HMS) work for the improvisation step, where V is the number of variables and HMS is the memory size. For each variable, the filter scans all HMS memory values.
  • O(HMS) work for the dominance check to find the worst memory entry.
  • One objective evaluation — typically the dominant cost.

For the multi-objective case, the archive operations add:

  • O(A) for the dominance check against all A archive entries.
  • O(A² × M) for crowding distance computation when the archive overflows, where M is the number of objectives. This is run only when the archive exceeds archive_size and is bounded by the archive cap.

References

Geem, Z. W., Kim, J. H., & Loganathan, G. V. (2001). A new heuristic optimization algorithm: Harmony search. Simulation, 76(2), 60–68.

Lee, K. S., & Geem, Z. W. (2005). A new meta-heuristic algorithm for continuous engineering optimization: harmony search theory and practice. Computer Methods in Applied Mechanics and Engineering, 194(36–38), 3902–3933.

Mahdavi, M., Fesanghary, M., & Damangir, E. (2007). An improved harmony search algorithm for solving optimization problems. Applied Mathematics and Computation, 188(2), 1567–1579.

Deb, K. (2000). An efficient constraint handling method for genetic algorithms. Computer Methods in Applied Mechanics and Engineering, 186(2–4), 311–338.

Ricart, J., Hüttemann, G., Lima, J., & Barán, B. (2011). Multiobjective harmony search algorithm proposals. Electronic Notes in Theoretical Computer Science, 281, 51–67.

Deb, K., Pratap, A., Agarwal, S., & Meyarivan, T. (2002). A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6(2), 182–197.

