Tuning is the process of selecting and adjusting the parameters of a system or model to optimize its performance. It is a key step in machine learning, optimization algorithms, control systems, and related fields. The following are some common tuning methods:

1. Grid Search

Grid search is a simple hyperparameter tuning method: it exhaustively evaluates every combination in a given parameter grid and keeps the combination that optimizes the objective (for example, a loss function).
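The scikit-learn snippets in this article assume a training split `X_train`, `y_train` already exists. As a minimal sketch, using the Iris dataset as a stand-in, the setup might look like:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

# Any labeled dataset works here; Iris is just a convenient stand-in
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
```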

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Exhaustively try every combination of C and gamma (3 x 3 = 9 candidates)
param_grid = {'C': [0.1, 1, 10], 'gamma': [0.001, 0.01, 0.1]}
grid = GridSearchCV(SVC(), param_grid, refit=True, verbose=2)
grid.fit(X_train, y_train)
print(grid.best_params_)
```
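With scikit-learn's default 5-fold cross-validation, the 3 × 3 grid above costs 45 model fits; `refit=True` then retrains the best combination on the full training set, so `grid.best_estimator_` can be used directly for prediction.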

2. Random Search

Random search samples points at random from the parameter space instead of enumerating every possible combination. It is usually faster than grid search, especially when the parameter space is large.

```python
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC
from scipy.stats import uniform

# Sample C and gamma from continuous uniform distributions
param_dist = {'C': uniform(loc=0, scale=10), 'gamma': uniform(loc=0, scale=1)}
random_search = RandomizedSearchCV(SVC(), param_distributions=param_dist,
                                   n_iter=100, cv=5, verbose=2)
random_search.fit(X_train, y_train)
print(random_search.best_params_)
```
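Because `param_dist` draws from continuous `scipy.stats` distributions, each of the 100 iterations samples fresh values of `C` and `gamma`, so random search can hit values a fixed grid would never test, while the total budget stays capped at `n_iter` × `cv` = 500 fits.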

3. Bayesian Optimization

Bayesian optimization builds a probabilistic model that predicts how parameters will perform and uses it to pick the most promising candidates to evaluate next. It is especially effective when the parameter space is large and the objective is non-convex.

```python
from skopt import BayesSearchCV
from skopt.space import Real
from sklearn.svm import SVC

# Search C over a continuous range; a log-uniform prior suits
# scale parameters that span several orders of magnitude
param_space = {'C': Real(1e-3, 10, prior='log-uniform')}
bayes_search = BayesSearchCV(SVC(), param_space, n_iter=32, cv=5, verbose=2)
bayes_search.fit(X_train, y_train)
print(bayes_search.best_params_)
```
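`BayesSearchCV` follows the familiar `GridSearchCV` interface, so it is close to a drop-in replacement; further dimensions can be added as extra keys in `param_space` using scikit-optimize's `Real`, `Integer`, or `Categorical` spaces.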

4. Gradient Descent

Gradient descent optimizes continuous parameters by iteratively updating them to minimize the objective, following the rule w ← w − η·∇L(w), where η is the learning rate. This is the standard way to optimize the weights of deep learning models.

```python
def gradient_descent(f, df, x0, learning_rate=0.01, epochs=100):
    # Only the gradient df is needed for the update; f is kept for reference
    x = x0
    for _ in range(epochs):
        grad = df(x)
        x -= learning_rate * grad
    return x

# Example: minimize the squared loss L(w) = (w - 3)²
def loss_function(w):
    return (w - 3) ** 2

def gradient(w):
    return 2 * (w - 3)

x0 = 0.0
optimal_w = gradient_descent(loss_function, gradient, x0)
print(optimal_w)
```
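With these defaults each step applies x ← 0.98·x + 0.06, so starting from x0 = 0 the printed value is roughly 2.6 after 100 epochs, still short of the optimum w = 3; increasing `epochs` or `learning_rate` closes the gap.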

5. Particle Swarm Optimization

Particle swarm optimization is a swarm-intelligence method that searches for an optimum by mimicking the foraging behavior of a flock of birds. Each particle updates its velocity from three pulls: inertia on its current velocity, attraction toward its own best position (the cognitive term), and attraction toward the swarm's best position (the social term): v ← w·v + c1·r1·(pbest − x) + c2·r2·(gbest − x).

```python
import numpy as np

def value_function(x):
    # Example objective: the squared loss L(w) = (w - 3)²
    return (x - 3) ** 2

class Particle:
    def __init__(self):
        self.position = np.random.rand()
        self.velocity = np.random.rand()
        self.best_position = self.position
        self.best_value = value_function(self.position)

    def update(self, w, c1, c2, global_best_position):
        r1, r2 = np.random.rand(), np.random.rand()
        # Inertia + cognitive pull (own best) + social pull (swarm best)
        self.velocity = (w * self.velocity
                         + c1 * r1 * (self.best_position - self.position)
                         + c2 * r2 * (global_best_position - self.position))
        self.position += self.velocity

    def update_best(self):
        value = value_function(self.position)
        if value < self.best_value:
            self.best_value = value
            self.best_position = self.position

# Optimize the squared loss L(w) = (w - 3)² with a swarm of 30 particles
particles = [Particle() for _ in range(30)]
global_best_position = min(particles, key=lambda p: p.best_value).best_position

for _ in range(100):
    for particle in particles:
        particle.update(w=0.7, c1=1.4, c2=1.4,
                        global_best_position=global_best_position)
        particle.update_best()
    # Refresh the swarm-wide best after each generation
    global_best_position = min(particles, key=lambda p: p.best_value).best_position

print(global_best_position)
```
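The inertia weight `w` trades exploration against exploitation, while `c1` and `c2` weight the pull toward a particle's own best and the swarm's best; the reported answer is the swarm-wide best position found over all generations.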

Each of these methods has its own strengths and weaknesses; which one to use depends on the nature of the problem, the size of the parameter space, and the available computational resources.