This project aimed to replicate the results of Xin-She Yang's paper "A Framework for Self-Tuning Optimization Algorithm," which introduces the self-tuning firefly algorithm. The framework builds on the standard firefly algorithm by using a second firefly algorithm to optimize the hyperparameters of the first, improving both the rate of convergence and the accuracy of the solutions found. The goal of our study was to verify that the approach works as the paper states and to determine how effectively it improves convergence times across a wide array of problems.
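To make the idea concrete, here is a minimal sketch: a basic firefly algorithm whose hyperparameters (alpha, beta0, gamma) are themselves searched by an outer firefly run. The function names, parameter bounds, population sizes, and the choice of "best value found by the inner run" as the tuning objective are our own illustrative simplifications, not the exact setup used in the paper or in our experiments.

```python
import numpy as np

def firefly(objective, dim, bounds, n=20, iters=100,
            alpha=0.5, beta0=1.0, gamma=1.0, seed=None):
    """Minimal standard firefly algorithm for minimization.

    alpha: randomization strength, beta0: base attractiveness,
    gamma: light-absorption coefficient.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n, dim))
    f = np.array([objective(xi) for xi in x])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if f[j] < f[i]:  # firefly j is "brighter" (better), so i moves toward it
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                    x[i] = np.clip(x[i], lo, hi)
                    f[i] = objective(x[i])
    best = np.argmin(f)
    return x[best], f[best]

def tuning_objective(params, problem, dim, bounds):
    """Score a hyperparameter triple by the best value a short inner run reaches
    (an illustrative criterion; other measures, e.g. iterations to converge, work too)."""
    alpha, beta0, gamma = params
    _, fbest = firefly(problem, dim, bounds, n=15, iters=30,
                       alpha=alpha, beta0=beta0, gamma=gamma)
    return fbest

def self_tune(problem, dim, bounds):
    """Outer firefly run searching over (alpha, beta0, gamma) for the inner solver."""
    return firefly(lambda p: tuning_objective(p, problem, dim, bounds),
                   dim=3, bounds=(0.01, 2.0), n=8, iters=10)
```

In this sketch the outer and inner solvers are literally the same routine, which mirrors the appeal of the framework: no separate tuner has to be designed, only a second, smaller optimization run over the hyperparameter space.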
We tested the self-tuning firefly algorithm on a number of well-known benchmark optimization functions, including the Ackley function, Yang's forest function, and the Zakharov function. In every case, the algorithm found hyperparameters that reduced the number of iterations needed to converge. Running the optimizer on each function with the best-found parameters produced more consistent convergence in several orders of magnitude fewer iterations, with each iteration over a hundred times faster, compared to runs using the standard hyperparameters. In all, the framework put forward by Xin-She Yang dramatically improved performance. To view the rest of our findings, please see the full report.
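For reference, the sketch below gives standard textbook definitions of two of the benchmarks mentioned above; both have a global minimum of 0 at the origin. The dimensions and search ranges shown here are not necessarily the ones used in our experiments.

```python
import numpy as np

def ackley(x):
    """Ackley benchmark: highly multimodal, global minimum f(0, ..., 0) = 0."""
    x = np.asarray(x, dtype=float)
    d = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / d)
            + 20.0 + np.e)

def zakharov(x):
    """Zakharov benchmark: unimodal, global minimum f(0, ..., 0) = 0."""
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    s = np.sum(0.5 * i * x)
    return np.sum(x ** 2) + s ** 2 + s ** 4

# Example usage with the sketch above (bounds are illustrative):
# best_params, _ = self_tune(ackley, dim=5, bounds=(-32.0, 32.0))
```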
Because we were able to show that transfer learning is applicable to knowledge-graph link prediction, more research can be done to isolate the values that remain consistent and cause the improved training speed. That knowledge can then be leveraged to identify good initialization values for this task and, ideally, to learn why those values work well.