Questions about autoencoders
I have three questions about autoencoders; any help is much appreciated:
1- I have noticed a lack of research papers on deep autoencoders (AEs), even though the concept is explained in plenty of tutorials and examples, and most of those tutorials claim the model is powerful. Is there a reason for the scarcity of published research using AEs, especially for anomaly or novelty detection?
2- In every tutorial I have seen, the autoencoder threshold used as the decision boundary for anomaly detection is set manually (hard-coded), by testing several values and picking the best one. Is there another technique for selecting the threshold? In other words, what thresholding mechanisms exist that can determine the threshold automatically?
Regarding your first question (minus the anomaly-detection part), François Chollet, the creator of Keras, gives some good hints in his (highly recommended) blog post Building Autoencoders in Keras:
What are autoencoders good for?
They are rarely used in practical applications. In 2012 they briefly found an application in greedy layer-wise pretraining for deep convolutional neural networks, but this quickly fell out of fashion as we started realizing that better random weight initialization schemes were sufficient for training deep networks from scratch. In 2014, batch normalization started allowing for even deeper networks, and from late 2015 we could train arbitrarily deep networks from scratch using residual learning.
[...]
So what's the big deal with autoencoders?
Their main claim to fame comes from being featured in many introductory machine learning classes available online. As a result, a lot of newcomers to the field absolutely love autoencoders and can't get enough of them. This is the reason why this tutorial exists!
Update
That said, there do seem to be some cases where autoencoders are used in practice for anomaly detection; here are some recent papers:
Clustering and Unsupervised Anomaly Detection with L2 Normalized Deep Auto-Encoder Representations
Unsupervised Anomaly Detection via Variational Auto-Encoder for Seasonal KPIs in Web Applications
Anomaly Detection with Robust Deep Auto-encoders
and blog posts:
Credit Card Fraud Detection using Autoencoders in Keras
H2O - Autoencoders and anomaly detection (Python)
How Deep Learning Analytics Can Keep Your Data and Decisions in Line
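On your second question, one common alternative to hand-tuning is to set the threshold at a high quantile of the reconstruction errors, based on an assumed contamination rate (the expected fraction of anomalies). The sketch below uses synthetic error values in place of real per-sample autoencoder losses; the contamination rate of 5% is an assumption for illustration, not a recommendation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic reconstruction errors: mostly small values (normal points)
# plus a few large ones (anomalies). In practice these would be the
# per-sample losses of a trained autoencoder, e.g.
#   errors = np.mean((X - model.predict(X)) ** 2, axis=1)
normal_errors = rng.exponential(scale=0.1, size=950)
anomaly_errors = rng.exponential(scale=1.0, size=50) + 0.5
errors = np.concatenate([normal_errors, anomaly_errors])

# Automatic threshold: a high quantile of the observed errors,
# derived from an assumed contamination rate (~5% anomalies here).
contamination = 0.05
threshold = np.quantile(errors, 1.0 - contamination)

flags = errors > threshold
print(f"threshold = {threshold:.3f}, flagged {flags.sum()} of {len(errors)} points")
```

Other mechanisms in the same spirit include fitting a Gaussian to the errors and flagging points beyond mean + k·std, or extreme-value-theory approaches; all of them still encode an assumption (a contamination rate or a distributional form) rather than being fully assumption-free.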