This answer has three parts: the first and second use nearest-neighbors-like voting; the third uses the Apriori algorithm for association rules mining. All use a linear vector space representation of the data and categorization of the numerical variable(s).
The ideas of the first and third parts can be found in the document "Importance of variables investigation guide".
The second part is a modification of the first that uses Classify.
Data
Let us get Titanic data and categorize the age variable.
titanicDataset =
  Flatten /@ Apply[List, ExampleData[{"MachineLearning", "Titanic"}, "Data"], {1}];
Dimensions[titanicDataset]

(* {1309, 4} *)

titanicVarNames =
  Flatten[List @@ ExampleData[{"MachineLearning", "Titanic"}, "VariableDescriptions"]]

(* {"passenger class", "passenger age", "passenger sex", "passenger survival"} *)

titanicDatasetCatAge = titanicDataset;
ageQF =
  Piecewise[{{1, -\[Infinity] < #1 <= 5}, {2, 5 < #1 <= 14}, {3, 14 < #1 <= 21},
     {4, 21 < #1 <= 28}, {5, 28 < #1 <= 35}, {6, 35 < #1 <= 50},
     {7, 50 < #1 <= \[Infinity]}}, 0] &;
titanicDatasetCatAge[[All, 2]] =
  Map[If[MissingQ[#], 0, ageQF[#]] &, titanicDatasetCatAge[[All, 2]]] /.
   {1 -> "1(0-6)", 2 -> "2(6-14)", 3 -> "3(15-21)", 4 -> "4(22-28)",
    5 -> "5(29-35)", 6 -> "6(36-50)", 7 -> "7(50+)", 0 -> "0(missing)"};
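For example, ageQF maps raw ages to interval codes, which the replacement rules then turn into the category labels:

ageQF /@ {3, 17, 62}

(* {1, 3, 7} *)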
This is how the data titanicDatasetCatAge looks:

Class determination by tallying
The idea is:

1. to make a linear vector space representation of the data, and
2. for a given query list of attributes that does not make a full record, to find all records that have those attributes and use voting to determine the class label for the query list.
Vector space representation
titanicDatasetCatAge = DeleteCases[titanicDatasetCatAge, {___, _Missing, ___}];
docs = Map[ToString, titanicDatasetCatAge, {-1}];
RandomSample[docs, 4]

(* {{"2nd", "6(36-50)", "female", "survived"}, {"1st", "3(15-21)", "female", "survived"},
    {"1st", "5(29-35)", "female", "survived"}, {"3rd", "1(0-6)", "male", "died"}} *)

cTerms = Union[Flatten[docs]];
cTermToIndexRules = Thread[cTerms -> Range[Length[cTerms]]];
cMat =
  SparseArray[
   Flatten@MapIndexed[
     Thread[Thread[{#2[[1]], #[[All, 1]] /. cTermToIndexRules}] -> #[[All, 2]]] &,
     Tally /@ docs]];
Class label tally finding function
Clear[ClassLabelTally];
ClassLabelTally[{cMat_SparseArray, termToIndexRules : {_Rule ..}},
   classLabels : {_String ..}, query : {_String ..}] :=
  Block[{qvec, nnInds, res},
   (* query vector over the terms *)
   qvec = SparseArray[Thread[(query /. termToIndexRules) -> 1], Dimensions[cMat][[2]]];
   (* rows that contain all of the query terms *)
   nnInds = Pick[Range[Dimensions[cMat][[1]]],
     # >= Total[qvec] & /@ Flatten[cMat.qvec]];
   (* tally the class label columns over those rows *)
   res = Transpose[{classLabels,
      Normal[Total[cMat[[nnInds, classLabels /. termToIndexRules]]]]}];
   res[[All, 2]] = N[res[[All, 2]]/Total[res[[All, 2]]]];
   res];
Examples of use
ClassLabelTally[{cMat, cTermToIndexRules}, {"died", "survived"}, {"female"}]

(* {{"died", 0.272532}, {"survived", 0.727468}} *)

ClassLabelTally[{cMat, cTermToIndexRules}, {"died", "survived"}, {"male"}]

(* {{"died", 0.809015}, {"survived", 0.190985}} *)

ClassLabelTally[{cMat, cTermToIndexRules}, {"died", "survived"}, {"male", "3rd"}]

(* {{"died", 0.84787}, {"survived", 0.15213}} *)

ClassLabelTally[{cMat, cTermToIndexRules}, {"died", "survived"}, {"female", "2nd"}]

(* {{"died", 0.113208}, {"survived", 0.886792}} *)

ClassLabelTally[{cMat, cTermToIndexRules}, {"died", "survived"}, {"female", "3(15-21)"}]

(* {{"died", 0.289855}, {"survived", 0.710145}} *)

ClassLabelTally[{cMat, cTermToIndexRules}, {"died", "survived"}, {"female", "3(15-21)", "1st"}]

(* {{"died", 0.}, {"survived", 1.}} *)
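The same tallies can be obtained by direct counting over the records, which makes a useful sanity check. (A sketch; DirectClassTally is a name introduced here for illustration, not part of the code above.)

Clear[DirectClassTally];
DirectClassTally[data_, classLabels : {_String ..}, query : {_String ..}] :=
  Block[{recs, counts},
   (* records that contain every query term *)
   recs = Select[Map[ToString, data, {-1}], SubsetQ[#, query] &];
   (* count the class labels in the last column *)
   counts = Count[recs[[All, -1]], #] & /@ classLabels;
   Transpose[{classLabels, N[counts/Total[counts]]}]];

DirectClassTally[titanicDatasetCatAge, {"died", "survived"}, {"female", "2nd"}]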
Modification with Classify
The approach above can be modified to use Classify.
First we make a classifier:
cf = Classify[ titanicDatasetCatAge[[All, {1, 2, 3}]] -> titanicDatasetCatAge[[All, -1]]]
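The classifier can also be queried directly with a full record; for example, to get the class probabilities for a record chosen here just for illustration:

cf[{"1st", "3(15-21)", "female"}, "Probabilities"]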
We can define a function that runs the classifier over the instances containing the queried set of feature values:
Clear[VarFeaturesClassify];
VarFeaturesClassify[{cf_, data_, n_}, {cMat_SparseArray, termToIndexRules : {_Rule ..}},
   classLabels : {_String ..}, query : {_String ..}] :=
  Block[{qvec, nnInds, inds, res},
   qvec = SparseArray[Thread[(query /. termToIndexRules) -> 1], Dimensions[cMat][[2]]];
   nnInds = Pick[Range[Dimensions[cMat][[1]]],
     # >= Total[qvec] & /@ Flatten[cMat.qvec]];
   (* sample at most n of the matching records, or take all of them *)
   inds = If[IntegerQ[n], RandomSample[nnInds, UpTo[n]], nnInds];
   res = cf[data[[#]], "TopProbabilities"] & /@ inds;
   (* average the class probabilities over the selected records *)
   Map[#[[1, 1]] -> Mean[#[[All, 2]]] &, GatherBy[Flatten[res], #[[1]] &]]];
Here are examples of use:
VarFeaturesClassify[{cf, titanicDatasetCatAge[[All, {1, 2, 3}]], 20},
 {cMat, cTermToIndexRules}, {"died", "survived"}, {"female", "2nd"}]

(* {"survived" -> 0.828833, "died" -> 0.176815} *)

VarFeaturesClassify[{cf, titanicDatasetCatAge[[All, {1, 2, 3}]], All},
 {cMat, cTermToIndexRules}, {"died", "survived"}, {"female", "2nd"}]

(* {"survived" -> 0.853315, "died" -> 0.152542} *)

VarFeaturesClassify[{cf, titanicDatasetCatAge[[All, {1, 2, 3}]], 20},
 {cMat, cTermToIndexRules}, {"died", "survived"}, {"female", "3(15-21)", "1st"}]

(* {"survived" -> 0.984931} *)
Using Apriori
The code below closely follows the code in the document "Importance of variables investigation guide" pages 20-22.
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/AprioriAlgorithm.m"]

\[Mu] = 0.02;
Print["Number of records corresponding to \[Mu]=", \[Mu], ": ",
  Length[titanicDatasetCatAge]*\[Mu]]
Print["Computation time:",
  AbsoluteTiming[
    {aprioriRes, itemToIDRules, idToItemRules} =
      AprioriApplication[titanicDatasetCatAge, \[Mu]];][[1]]];

Grid[Prepend[
  Tally[Map[Length, Join @@ aprioriRes]],
  {"frequent set\nlength", "number of\nfrequent sets"}],
 Dividers -> {None, {True, True}}]

items = {"survived"};
itemRules =
  ItemRules[titanicDatasetCatAge, aprioriRes, itemToIDRules, idToItemRules,
   items, 0.7, 0.02];
The following command tabulates the rules that have "survived" as a consequent, sorted by confidence.
Magnify[#, 0.7] &@
 Grid[Prepend[
   SortBy[
    Select[Join @@ itemRules,
     MemberQ[items, #[[-1, 1]]] && #[[2]] > 0.7 && 2 <= Length[#[[-2]]] <= 10 &],
    -#[[2]] &],
   Map[Style[#, Blue, FontFamily -> "Times"] &,
    {"Support", "Confidence", "Lift", "Leverage", "Conviction",
     "Antecedent", "Consequent"}]],
  Alignment -> Left]
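The support and confidence of a particular rule can be cross-checked by direct counting. Here is a sketch for the rule {"1st", "female"} -> {"survived"} (an itemset chosen here just for illustration):

n = Length[titanicDatasetCatAge];
strData = Map[ToString, titanicDatasetCatAge, {-1}];
(* records containing the antecedent, and records containing antecedent and consequent *)
antecedentCount = Count[strData, r_ /; SubsetQ[r, {"1st", "female"}]];
bothCount = Count[strData, r_ /; SubsetQ[r, {"1st", "female", "survived"}]];
{N[bothCount/n], N[bothCount/antecedentCount]}  (* {support, confidence} *)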
