Category: AI for Trading

  • Active Investment Management 101: What Are IC, IR, the Sharpe Ratio…

    Newcomers to active investment management (and to quant in general) can easily be annoyed by the acronyms. Below are a few of the entry-level metrics that are commonly used to evaluate the performance of a strategy.

    Sharpe Ratio

    The Sharpe ratio (SR) compares a portfolio's expected excess return with the volatility of that excess return, measured by its standard deviation. It measures the compensation earned as average excess return per unit of risk taken:
    SR = (R_a - R_f) / σ_a

    where:

    • R_a: the portfolio's return. Strictly this is an expectation, but in practice it is estimated by the average of the portfolio's returns over the N periods observed so far;
    • R_f: the risk-free rate;
    • σ_a: the standard deviation of the portfolio's excess return. This is also an expectation, and it is likewise estimated by the sample standard deviation over the N periods observed so far.
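
    Below is a minimal sketch of estimating the Sharpe ratio from a series of periodic returns. The return series, the constant risk-free rate, and the annualization convention are illustrative assumptions, not part of the original note.

```python
import numpy as np

def sharpe_ratio(returns, risk_free_rate=0.0, periods_per_year=252):
    """Estimate SR = (R_a - R_f) / sigma_a from observed periodic returns.

    The expectations in the formula are replaced by their sample estimates
    over the N periods observed so far.
    """
    excess = np.asarray(returns) - risk_free_rate     # per-period excess returns
    sr = excess.mean() / excess.std(ddof=1)           # per-period Sharpe ratio
    return sr * np.sqrt(periods_per_year)             # annualize (a common convention)

# Made-up daily returns, zero risk-free rate
daily_returns = np.random.normal(0.0005, 0.01, size=252)
print(sharpe_ratio(daily_returns))
```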

    Information Coefficient (IC)

    The IC measures the correlation between an alpha factor's signal and the forward returns, capturing the accuracy of the manager's forecasting skill. The cross-sectional IC of a portfolio usually refers to the correlation between the model's predicted next-period returns for all stocks (i.e. the factor values an alpha factor assigns to every stock at time T) and those stocks' realized returns over the next period (T to T+x). With, say, 3000 stocks, this is simply the correlation between two 3000-dimensional vectors, most often computed with the Pearson correlation coefficient (using the Spearman rank correlation instead gives the "rank IC").
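
    A sketch of one period's cross-sectional IC, assuming we already hold the factor values assigned at time T and the realized forward returns for the same 3000-stock universe (all numbers below are simulated):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

n_stocks = 3000
factor_values = np.random.normal(size=n_stocks)    # alpha factor values at time T
forward_returns = 0.05 * factor_values + np.random.normal(scale=1.0, size=n_stocks)

ic, _ = pearsonr(factor_values, forward_returns)        # "plain" IC (Pearson)
rank_ic, _ = spearmanr(factor_values, forward_returns)  # rank IC (Spearman)
print(ic, rank_ic)
```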

    Information Ratio (IR)

    A statistic built on top of the IC: IR = mean(IC) / std(IC), the mean of the IC series divided by its standard deviation. A higher information ratio means a higher information coefficient per unit of risk taken.
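
    And the IR, assuming a daily IC series has already been collected over a backtest (again with simulated numbers):

```python
import numpy as np

ic_series = np.random.normal(0.02, 0.10, size=250)   # one IC value per day of the backtest

ir = ic_series.mean() / ic_series.std(ddof=1)        # mean IC divided by its standard deviation
print(ir)
```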

  • Factors, Factor Returns, Factor Exposures, Factor Loadings: What Are These Things?

    As a student without a formal finance background, I found that some of these terms look baffling at first, but once you understand them the reaction is usually:

    "Damn, isn't this just xxx?!"

    When I first started doing quant work, the words I kept running into were all kinds of factors, plus factor exposure and factor loading. So I searched Baidu, Zhihu and Google, and was immediately handed pages of long essays on where the terms come from and all sorts of multi-factor models. For a beginner like me, though, what I most wanted to know first was simply what the thing means, and only then the model behind it.

    So here is the question

    What is factor exposure (aka factor loading)? Factor exposure is the same thing as factor loading. Suppose your portfolio holds p stocks and you have m factors; then picture the factor exposure as a p * m matrix: each stock's factor value on each factor. For example, if your factors are 32 industry classifications, every element of this matrix is either 0 or 1, where 1 means the stock belongs to that industry. If we further assume a stock can belong to only one industry, then each row of the matrix has exactly one 1 and zeros everywhere else.
    These values are generally normalized, or standardized; otherwise the statistical analysis below would not work.
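
    A toy illustration of such an exposure matrix, assuming a one-industry-per-stock classification plus a single continuous style factor that gets standardized across stocks. The tickers, industries, and factor values are all invented for the example:

```python
import pandas as pd

stocks = ["AAA", "BBB", "CCC", "DDD"]
industry = pd.Series(["Tech", "Tech", "Energy", "Banks"], index=stocks)
momentum = pd.Series([0.12, -0.03, 0.40, 0.05], index=stocks)   # raw style factor values

# One-hot industry exposures: each row contains a single 1 in the stock's industry column
industry_exposure = pd.get_dummies(industry).astype(float)

# Standardize the continuous factor cross-sectionally (mean 0, standard deviation 1)
momentum_z = (momentum - momentum.mean()) / momentum.std()

X = industry_exposure.assign(momentum=momentum_z)    # the p-by-m exposure matrix
print(X)
```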

    So what is this thing good for?

    The assumption behind a structural risk model is that portfolio returns can be explained by the following equation:

    r = X * b + u

    Still assuming p stocks, m factors, and an investment window of l periods,
    where

    • r is the matrix of stock returns in the portfolio, p*l;
    • X is the factor exposure matrix, p*m;
    • b is the matrix of factor returns, m*l;
    • u is the specific return (also called the residual return), the part that X*b cannot explain, p*l.
      In short, a portfolio's return can be explained by a handful of factors, and whatever cannot be explained is accounted for separately.

    If we further assume that the individual stocks' residual returns are mutually uncorrelated, then:

    1. Suppose I assume stock returns (r) are driven by different industry factors. Then, given the returns of the corresponding industry ETFs (the b above), I can run regressions to infer each stock's industry exposure (the X above);
    2. Or suppose stock returns (r) and the stocks' Barra-style factor values (X) are both known. Then a regression can likewise recover the return of each style factor (b); a sketch of this case follows below.
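
    A sketch of case 2 under simplifying assumptions: exposures X and one period of stock returns r are known, and the factor returns b are recovered with a plain least-squares cross-sectional regression (production risk models typically use a weighted regression instead). All inputs are simulated:

```python
import numpy as np

p, m = 500, 10                                   # p stocks, m factors, a single period (l = 1)
rng = np.random.default_rng(0)

X = rng.normal(size=(p, m))                      # known, standardized factor exposures
true_b = rng.normal(scale=0.01, size=m)
r = X @ true_b + rng.normal(scale=0.02, size=p)  # observed stock returns for the period

b_hat, *_ = np.linalg.lstsq(X, r, rcond=None)    # estimated factor returns for the period
u = r - X @ b_hat                                # specific (residual) returns
print(b_hat.round(4))
```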

    A side note: this feels a bit like implied versus realized volatility in option pricing. Implied volatility is backed out from observed option prices via the Black-Scholes formula, while realized volatility is measured from the underlying's price history. The factor exposures backed out from ETF returns above are somewhat analogous to the implied quantity; after all, a stock's industry membership can also be read directly from fundamental data…

  • Slippage and Market Impact

    Before explaining slippage, first look at a limit order book:
    [figure: tcost-1.png]

    In the figure, the gap between the top bid and the top ask is called the spread (点差, sometimes 盘口), and the point halfway between them is called the mid-point price. In the simple case, we use the mid price as the baseline for measuring transaction costs.
    [figure: tcost-3.png]

    For each child order of a trade, the difference between the final fill price and this baseline is the measure of transaction cost known as slippage. An example: suppose I want to buy 500 lots of Moutai. The best bid is 999.8 and the best ask is 1000.2, so the mid price is 1000.0. I might submit a market buy order, which executes against the best price on the opposite side. After the order goes out, I get two fills back: 300 @ 1000.2 and 200 @ 1000.4. The book did not move for the first fill; by the time of the second fill, the book had become 1000.2 (bid1) / 1000.4 (ask1). The slippage of the two fills is therefore 0.2 and 0.4 respectively (remember, both are measured against the mid price at the time the order was placed).

    [figure: tocst-4.png]

    We can view slippage as the sum of two kinds of price impact: temporary price impact plus permanent price impact.

    The temporary price impact can be expressed as the gap between the fill-time best bid/ask (the side you trade against) and the mid price; in the example above it is 0.2 for the first fill. The permanent price impact can be expressed as the change in the mid price from before your trade to the time of the fill: your trading (together with that of other market participants) has moved the stock's quotes. In the example above, by the time of the second fill the mid price had moved from 1000.0 to 1000.3.

    In other words, the slippage of the two fills breaks down as follows:

    Fill      Temporary impact         Permanent impact          Slippage
    First     1000.2 - 1000.0 = 0.2    0                         0.2 + 0 = 0.2
    Second    1000.4 - 1000.3 = 0.1    1000.3 - 1000.0 = 0.3     0.1 + 0.3 = 0.4
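
    The same arithmetic in code, with the prices from the Moutai example hard-coded for illustration:

```python
arrival_mid = 1000.0    # mid price when the order was sent (the cost benchmark)

fills = [
    # (fill price, best bid at fill time, best ask at fill time)
    (1000.2, 999.8, 1000.2),
    (1000.4, 1000.2, 1000.4),
]

for price, bid, ask in fills:
    mid_at_fill = (bid + ask) / 2
    temporary = price - mid_at_fill        # cost of crossing the spread at fill time
    permanent = mid_at_fill - arrival_mid  # how far the mid has drifted since arrival
    slippage = price - arrival_mid         # total slippage vs. the arrival mid
    print(f"temporary={temporary:.1f}  permanent={permanent:.1f}  slippage={slippage:.1f}")
```
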
  • Backtest Best Practices

    Use cross-validation to achieve just the right amount of model complexity.

    Always keep an out-of-sample test dataset. You should only look at the results of a test on this dataset once all model decisions have been made. If you let the results of this test influence decisions made about the model, you no longer have an estimate of generalization error.

    Be wary of creating multiple model configurations. If the Sharpe ratio of a backtest is 2, but there are 10 model configurations, this is a kind of multiple comparison bias. This is different than repeatedly tweaking the parameters to get a sharpe ratio of 2.

    Be careful about your choice of time period for validation and testing. Be sure that the test period is not special in any way.

    Be careful about how often you touch the data. You should only use the test data once, when your validation process is finished and your model is fully built. Too many tweaks in response to tests on validation data are likely to cause the model to increasingly fit the validation data.

    Keep track of the dates on which modifications to the model were made, so that you know the date on which a provable out-of-sample period commenced. If a model hasn’t changed for 3 years, then the performance on the past 3 years is a measure of out-of-sample performance.

    Traditional ML is about fitting a model until it works. Finance is different—you can’t keep adjusting parameters to get a desired result. Maximizing the in-sample sharpe ratio is not good—it would probably make out of sample sharpe ratio worse. It’s very important to follow good research practices.

    How does one split data into training, validation, and test sets so as to avoid bias induced by structural changes? It’s not always better to use the most recent time period as the test set, sometimes it’s better to have a random sample of years in the middle of your dataset. You want there to be nothing SPECIAL about the hold-out set. If the test set was the quant meltdown or financial crisis—those would be special validation sets. If you test on those time periods, you would be left with the unanswerable question: was it just bad luck? There is still some value in a strategy that would work every year except during a financial crisis.

    Alphas tend to decay over time, so one can argue that using the past 3 or 4 years as a hold out set is a tough test set. Lots of things work less and less over time because knowledge spreads and new data are disseminated. Broader dissemination of data causes alpha decay. A strategy that performed well when tested on a hold-out set of the past few years would be slightly more impressive than one tested on a less recent time period.

    Source: AI for Trading, Udacity

  • Note: Overlapping Labels, AI for Trading

    The problem

    Overlapping labels are a problem you run into when training predictive models on financial data. As the figure below shows, suppose we want to train a model to predict the return over the next week. In the simplest setup, for each day T we would use the continuous return over the following week as the training label (the y), so that every daily sample has a corresponding one-week-ahead label. But because financial data are autocorrelated, the labels of consecutive days are typically correlated with one another, which conflicts with the assumption made by most machine learning models that the input samples are independent and identically distributed (IID).
    [figure: example-overlapping-labels.png]

    Take a random forest as an example. If it is trained on samples like the above, the samples inside a bag will tend to be correlated with one another, and so will the out-of-bag samples. The individual decision trees therefore come out rather similar, and the error rate of the resulting forest rises because the trees are too alike.

    Solution 1: sub-sampling

    [figure: example-subsample.png]
    As shown above, if the training target is the one-week-ahead return, we can sub-sample and keep only each Friday's one-week-ahead return. The drawback is obvious: we throw away a lot of training data. Imagine the prediction target were the return over the next month or the next year; hardly any training data would be left.
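
    A sketch of this sub-sampling, assuming a one-week (5-trading-day) target on a single, simulated price series:

```python
import numpy as np
import pandas as pd

dates = pd.bdate_range("2020-01-01", periods=500)
prices = pd.Series(100 * np.exp(np.random.normal(0, 0.01, size=500).cumsum()), index=dates)

# 5-day forward return for every day: labels of consecutive days overlap heavily
fwd_1w = prices.shift(-5) / prices - 1

# Solution 1: keep only one observation per week (Fridays), so the labels no longer overlap
fridays_only = fwd_1w[fwd_1w.index.dayofweek == 4].dropna()
print(len(fwd_1w.dropna()), "->", len(fridays_only))
```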

    Solution 2: adjust the random forest's bagging procedure

    Draw fewer samples into each bag, so that the samples within a bag are less correlated with one another.

    Solution 3: rotate the data

    [figure: rf-subsample-and-ensemble.png]
    Building on solution 1, and still targeting the one-week-ahead return, we can train 5 different models that sub-sample Monday, Tuesday, …, Friday data respectively, and finally combine the five forests.
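
    A sketch of solution 3 (with solution 2 folded in via max_samples) on the same kind of simulated weekly target; the features here are random placeholders and the model settings are illustrative:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

dates = pd.bdate_range("2018-01-01", periods=750)
features = pd.DataFrame(np.random.normal(size=(750, 4)), index=dates)
prices = pd.Series(100 * np.exp(np.random.normal(0, 0.01, size=750).cumsum()), index=dates)
label = (prices.shift(-5) / prices - 1).rename("fwd_1w")

models = []
for weekday in range(5):                     # Monday = 0, ..., Friday = 4
    mask = (features.index.dayofweek == weekday) & label.notna()
    rf = RandomForestRegressor(
        n_estimators=100,
        max_samples=0.5,                     # solution 2: smaller bags, less within-bag correlation
        random_state=weekday,
    )
    rf.fit(features[mask], label[mask])      # train only on this weekday's non-overlapping labels
    models.append(rf)

# "Combine the five forests" here simply by averaging their predictions
latest = features.iloc[[-1]]
print(np.mean([m.predict(latest) for m in models]))
```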


  • Notes – Tree-based models with financial data, AI for Trading

    Importance of Random Column Selection

    Sometimes one feature will dominate in finance. If you don’t apply some type of random feature selection, then your trees will not be that different (i.e., will be correlated) and that reduces the benefit of ensembling.

    What features are typically dominant? Classical, price-driven factors, like mean reversion or momentum factors, often dominate. You may also see that features that define industry sectors or market “regimes” (periods defined, for example, by high or low market volatility or other market-wide trends) are towards the root of the tree.
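
    In scikit-learn's forests this random column selection is controlled by the max_features parameter; the concrete values below are purely illustrative:

```python
from sklearn.ensemble import RandomForestClassifier

# Limiting the columns considered at each split keeps a single dominant factor
# (e.g. a short-term reversal signal) from sitting at the root of every tree,
# so the trees stay decorrelated and ensembling keeps its benefit.
rf = RandomForestClassifier(
    n_estimators=500,
    max_features="sqrt",      # consider only sqrt(n_features) candidate columns per split
    min_samples_leaf=100,     # illustrative; see the note on hyperparameters below
)
```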

    Choosing Hyperparameter Values

    In non-financial and non-time series machine learning, setting this hyperparameter is fairly straightforward: you use grid search cross-validation to find the value that maximizes the model’s performance on validation data. When you have time-series data, you typically don’t use cross-validation because usually you just want a single validation dataset that is as close in time as possible to the present. If you have a problem with high signal-to-noise, then you can try a bit of parameter tuning on the single validation set. In finance, though, you have time series data and you have low signal-to-noise. Therefore, you have one validation set and if you were to try a bunch of parameter values on this validation set, you would almost surely be overfitting. As such, you need to set the parameter with some judgement and minimal trials.
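
    A sketch of that restrained approach: a single chronological validation split and only a handful of pre-chosen candidate values. The function name, the column handling, and the candidate list are assumptions for illustration:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def single_split_tune(X, y, split_date, candidates=(10, 50, 200)):
    """Try a few min_samples_leaf values on one time-ordered validation set.

    X is a DataFrame indexed by date; y holds discrete (e.g. winner/loser) labels.
    """
    train = X.index < split_date
    scores = {}
    for leaf in candidates:
        model = RandomForestClassifier(n_estimators=200, min_samples_leaf=leaf, random_state=0)
        model.fit(X[train], y[train])
        scores[leaf] = accuracy_score(y[~train], model.predict(X[~train]))
    return scores

# Example: scores = single_split_tune(features, labels, "2019-01-01")
```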

    Random Forests for Alpha Combination

    [figure: rf-for-alpha-combination.png]

    For this type of problem, we have data that look like the above. Each row is indexed by both date and asset. We typically have several alpha factors, and we then calculate “features”, which provide the random forest model additional information. For example, we may calculate date features, which the algorithm could use to learn that certain factors are particularly predictive during certain periods.
    [figure: example-finance-tree.png]
    What are we trying to predict? We’re trying to predict asset returns—but not their decimal values! We rank them relative to each other into only two buckets, such that we essentially predict winners and losers on the day.
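
    A sketch of how such a target could be built, assuming a Series of forward returns indexed by (date, asset): within each date the assets are ranked against each other and split into an upper and a lower bucket. The data are simulated:

```python
import numpy as np
import pandas as pd

dates = pd.date_range("2021-01-04", periods=3)
assets = ["AAA", "BBB", "CCC", "DDD"]
idx = pd.MultiIndex.from_product([dates, assets], names=["date", "asset"])

fwd_return = pd.Series(np.random.normal(0, 0.02, len(idx)), index=idx, name="fwd_return")

# Label = 1 if the asset's forward return is above that day's cross-sectional median, else 0
label = fwd_return.groupby(level="date").transform(
    lambda x: (x.rank(pct=True) > 0.5).astype(int)
)
print(label.unstack())
```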


    Source: AI for Trading, Udacity

  • Validation for Financial Data

    Furthermore, when working with financial data, we can bring practitioners’ knowledge of markets and financial data to bear on our validation procedures. We know that since markets are competitive, factors decay over time; signals that may have worked well in the past may no longer work well by the current time. For this reason, we should generally test and validate on the most recent data possible, as testing on the recent past could be considered the most demanding test.

    It’s possible that the design of the model may cause it to perform better or worse in different market regimes; so the most recent time period may not be in a market regime in which the model would perform well. But generally, we still prefer to use most recent data to test if the model would work in the time most similar to the present. In practice, of course, before investing a lot of money in a strategy, we would allow time to elapse without changing the model, and test its performance with this true out-of-sample data: what’s known as “paper trading”.

    In summary, most common practice is to keep a block of data from the most recent time period as your test set.

    Then, the data are split into train, valid and test sets according to the following schematic:
    [figure: train-valid-test-time-2.png]

    When working with data that are indexed by asset and day, it’s important not to split data for the same day, but for different assets, among sets. This would manifest as a subtle form of lookahead bias. For example, say data from Coca-Cola and Pepsi for the same day ended up in different sets. Since they are very similar companies, one might expect their share price trends to be correlated. If the model were trained on data from one company, and then validated on data from the other company, it might “learn” about a price movement that affects both companies, and therefore have artificially inflated performance on the validation set.
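
    One way to implement such a split, assuming a DataFrame indexed by (date, asset) as in the alpha-combination note above: cut along unique dates so that all assets for a given day always land in the same set. The function and the boundary dates are illustrative:

```python
def split_by_date(df, valid_start, test_start):
    """Chronological train/valid/test split of a (date, asset)-indexed DataFrame.

    Splitting on dates guarantees that rows from the same day (e.g. Coca-Cola
    and Pepsi on the same date) never end up in different sets.
    """
    dates = df.index.get_level_values("date")
    train = df[dates < valid_start]
    valid = df[(dates >= valid_start) & (dates < test_start)]
    test = df[dates >= test_start]
    return train, valid, test

# Example: train, valid, test = split_by_date(features, "2018-01-01", "2020-01-01")
```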


    Source: AI for Trading, Udacity

  • Cross Validation for Time Series

    Methods for choosing training, testing and validation sets for time-series data work a little differently than the methods described so far. The main reasons we cannot use the previously described methods exactly as described are,

    We want our validation and testing procedure to mimic the way our model would work in production. In production, it’s impossible to train on data from the future. Accordingly, training on data that occurred later in time than the validation or test data is problematic.
    Time series data can have the property that data from later times are dependent on data from earlier times. Therefore, leaving out an observation does not remove all the associated information due to correlations with other observations.
    How do we modify cross validation procedures to treat time-series data? A common method is to divide the data in the following manner:
    [figure: time-series-validation-2.png]

    This way, each training set consists only of observations that occurred prior to the observations that form the validation set. Likewise, both the training and validation sets consist only of observations that occurred prior to the observations that form the test set. Thus, no future observations can be used in constructing the forecast.
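
    A sketch of this scheme using scikit-learn's TimeSeriesSplit, which always places each validation fold strictly after its training fold in time; a final, untouched block would still be reserved as the test set. The feature array is a placeholder:

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X = np.arange(100).reshape(-1, 1)    # placeholder feature matrix, already ordered in time

tscv = TimeSeriesSplit(n_splits=4)
for fold, (train_idx, valid_idx) in enumerate(tscv.split(X)):
    # Every training window ends before its validation window begins
    print(f"fold {fold}: train [0..{train_idx[-1]}], valid [{valid_idx[0]}..{valid_idx[-1]}]")
```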


    Source: AI for Trading, Udacity