Spark ML - LinearSVC
ml_linear_svc
Description
Perform classification using linear support vector machines (SVM). This binary classifier optimizes the hinge loss using the OWLQN optimizer. It currently supports only L2 regularization.
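For example, the L2 penalty and iteration budget are controlled through reg_param and max_iter. A minimal sketch, assuming an active Spark connection and a hypothetical two-class tbl_spark named df_tbl with a binary response column y (neither is part of this page):

library(sparklyr)

# Sketch only: df_tbl and y are hypothetical placeholders.
svc <- df_tbl %>%
  ml_linear_svc(
    y ~ .,
    reg_param = 0.01, # L2 penalty (lambda)
    max_iter = 50     # cap on OWLQN iterations
  )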
Usage
ml_linear_svc(
  x,
  formula = NULL,
  fit_intercept = TRUE,
  reg_param = 0,
  max_iter = 100,
  standardization = TRUE,
  weight_col = NULL,
  tol = 1e-06,
  threshold = 0,
  aggregation_depth = 2,
  features_col = "features",
  label_col = "label",
  prediction_col = "prediction",
  raw_prediction_col = "rawPrediction",
  uid = random_string("linear_svc_"),
  ...
)
Arguments
| Arguments | Description |
|---|---|
| x | A spark_connection, ml_pipeline, or a tbl_spark. |
| formula | Used when x is a tbl_spark. R formula as a character string or a formula. This is used to transform the input dataframe before fitting; see ft_r_formula for details (a minimal sketch follows this table). |
| fit_intercept | Boolean; should the model be fit with an intercept term? |
| reg_param | Regularization parameter (aka lambda). |
| max_iter | The maximum number of iterations to use. |
| standardization | Whether to standardize the training features before fitting the model. |
| weight_col | The name of the column to use as weights for the model fit. |
| tol | Convergence tolerance for iterative algorithms. |
| threshold | Threshold in binary classification prediction, in range [0, 1]. |
| aggregation_depth | (Spark 2.1.0+) Suggested depth for treeAggregate (>= 2). |
| features_col | Features column name, as a length-one character vector. The column should be a single vector column of numeric values. Usually this column is output by ft_r_formula. |
| label_col | Label column name. The column should be a numeric column. Usually this column is output by ft_r_formula. |
| prediction_col | Prediction column name. |
| raw_prediction_col | Raw prediction (a.k.a. confidence) column name. |
| uid | A character string used to uniquely identify the ML estimator. |
| … | Optional arguments; see Details. |
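As noted in the formula entry above, passing a formula is shorthand for an ft_r_formula transformation that feeds the default features_col and label_col. A minimal sketch, assuming a hypothetical two-class tbl_spark named df_tbl with a response column is_spam:

# Passing a formula transforms the input with ft_r_formula under the hood.
m1 <- df_tbl %>%
  ml_linear_svc(is_spam ~ .)

# Roughly equivalent: apply ft_r_formula yourself, then fit on the resulting
# default "features" and "label" columns (the returned wrappers differ slightly).
m2 <- df_tbl %>%
  ft_r_formula(is_spam ~ .) %>%
  ml_linear_svc()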
Value
The object returned depends on the class of x. If it is a spark_connection, the function returns an ml_estimator object. If it is an ml_pipeline, it will return a pipeline with the predictor appended to it. If it is a tbl_spark, the predictor is fit to the data and the fitted model is returned, which can then be passed to ml_predict() (as in the Examples below).
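An illustrative sketch of the three cases, assuming an active connection sc and the two-class iris data prepared as in the Examples below:

# spark_connection: returns an unfitted estimator specification
svc_spec <- ml_linear_svc(sc)

# ml_pipeline: returns the pipeline with the estimator appended as a stage
svc_stage <- ml_pipeline(sc) %>%
  ft_r_formula(Species ~ .) %>%
  ml_linear_svc()

# tbl_spark: fits immediately and returns the fitted model
svc_fit <- iris_training %>%
  ml_linear_svc(Species ~ .)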
See Also
Other ml algorithms: ml_aft_survival_regression(), ml_decision_tree_classifier(), ml_gbt_classifier(), ml_generalized_linear_regression(), ml_isotonic_regression(), ml_linear_regression(), ml_logistic_regression(), ml_multilayer_perceptron_classifier(), ml_naive_bayes(), ml_one_vs_rest(), ml_random_forest_classifier()
Examples
library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "local")
iris_tbl <- sdf_copy_to(sc, iris, name = "iris_tbl", overwrite = TRUE)

# LinearSVC is a binary classifier, so drop one species to keep two classes
partitions <- iris_tbl %>%
  filter(Species != "setosa") %>%
  sdf_random_split(training = 0.7, test = 0.3, seed = 1111)

iris_training <- partitions$training
iris_test <- partitions$test

svc_model <- iris_training %>%
  ml_linear_svc(Species ~ .)

pred <- ml_predict(svc_model, iris_test)

ml_binary_classification_evaluator(pred)
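The same workflow can also be expressed as an explicit pipeline. This is a sketch using standard sparklyr calls (ml_pipeline, ft_r_formula, ml_fit), not part of the original example:

svc_pipeline <- ml_pipeline(sc) %>%
  ft_r_formula(Species ~ .) %>%
  ml_linear_svc(max_iter = 50)

pipeline_model <- ml_fit(svc_pipeline, iris_training)
pipeline_pred <- ml_predict(pipeline_model, iris_test)
ml_binary_classification_evaluator(pipeline_pred)

spark_disconnect(sc)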