rpart not creating Decision Tree in R, SVM works
I'm trying to create a decision tree for classification, but no tree is being built. An SVM on the same data (training == test data) achieves an accuracy of 0.85; "play" is the target...
Any idea what I'm doing wrong? Here are the data and code:
https://gist.github.com/romeokienzler/c471819cbf156a69f73daf49f8c700c6
outlook,temp,humidity,windy,play
sunny,hot,high,false,no
sunny,hot,high,true,no
overcast,hot,high,false,yes
rainy,mild,high,false,yes
rainy,cool,normal,false,yes
rainy,cool,normal,true,no
overcast,cool,normal,true,yes
sunny,mild,high,false,no
sunny,cool,normal,false,yes
rainy,mild,normal,false,yes
sunny,mild,normal,true,yes
overcast,mild,high,true,yes
overcast,hot,normal,false,yes
rainy,mild,high,true,no
For the SVM, I encoded the data:
https://gist.github.com/romeokienzler/9bfce4182eda3d7662315621462c9cc6
outlook,temp,humidity,windy,play
1,1,2,FALSE,FALSE
1,1,2,TRUE,FALSE
2,1,2,FALSE,TRUE
3,2,2,FALSE,TRUE
3,3,1,FALSE,TRUE
3,3,1,TRUE,FALSE
2,3,1,TRUE,TRUE
1,2,2,FALSE,FALSE
1,3,1,FALSE,TRUE
3,2,1,FALSE,TRUE
1,2,1,TRUE,TRUE
2,2,2,TRUE,TRUE
2,1,1,FALSE,TRUE
3,2,2,TRUE,FALSE
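Roughly, that encoding can be produced like this (a sketch only; the original file name 5.tennis.csv and the exact level-to-integer mapping are assumptions read off the two tables above):

# Map the categorical columns of the original file to the integers used above.
raw <- read.csv("5.tennis.csv", stringsAsFactors = TRUE)
enc <- data.frame(
  outlook  = as.integer(factor(raw$outlook,  levels = c("sunny", "overcast", "rainy"))),  # sunny=1, overcast=2, rainy=3
  temp     = as.integer(factor(raw$temp,     levels = c("hot", "mild", "cool"))),         # hot=1, mild=2, cool=3
  humidity = as.integer(factor(raw$humidity, levels = c("normal", "high"))),              # normal=1, high=2
  windy    = tolower(as.character(raw$windy)) == "true",
  play     = raw$play == "yes"
)
write.csv(enc, "5.tennis_encoded.csv", row.names = FALSE)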
Here is the SVM case:
library(e1071)
df= read.csv("5.tennis_encoded.csv")
attach(df)
x <- subset(df, select=-play)
y <- play
detach(df)
model = svm(x,y,type = "C")
pred = predict(model,x)
truthVector = pred == y
good = length(truthVector[truthVector==TRUE])
bad = length(truthVector[truthVector==FALSE])
good/(good+bad)
[1] 0.8571429
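The last three lines just count correct vs. incorrect predictions; an equivalent, shorter way to get the same training accuracy would be:

# mean of a logical vector = fraction of TRUEs, i.e. the accuracy
mean(pred == y)   # 0.8571429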
And here is the decision tree:
df= read.csv("5.tennis_encoded.csv")
library(rpart)
model = rpart(play ~ .,method = "class", data=df)
print(model)
1) root 14 5 TRUE (0.3571429 0.6428571) *
So I basically get a tree consisting of nothing but the root node, with a class probability of 0.64 for play == yes.
Any idea what I'm doing wrong?
Most likely you are passing too little data to the algorithm for it to perform a split.
Have a look at the rpart.control function for more details:
rpart.control(minsplit = 20, minbucket = round(minsplit/3), cp = 0.01,
maxcompete = 4, maxsurrogate = 5, usesurrogate = 2, xval = 10,
surrogatestyle = 0, maxdepth = 30, ...)
As you can see, the default minimum number of observations required to attempt a split is 20.
If you use
model = rpart(play ~ .,method = "class", data=df, control= rpart.control(minsplit=2))
you should get more splits.
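For example, the following should produce a tree with actual splits instead of just the root (a sketch; 5.tennis_encoded.csv is the encoded file from the question, and printcp simply displays the complexity table for the splits that were found):

library(rpart)
df <- read.csv("5.tennis_encoded.csv")

# minsplit = 2 allows a node to be split as soon as it contains 2 observations,
# which matters here because the whole data set has only 14 rows,
# well below the default minsplit of 20.
model <- rpart(play ~ ., data = df, method = "class",
               control = rpart.control(minsplit = 2))

print(model)    # should now show more than the single root node
printcp(model)  # cp table for the splits that were found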