Questions tagged [machine-learning]
How can we build computer systems that automatically improve with experience, and what are the fundamental laws that govern all learning processes?
3,389 questions
0 votes
0 answers
13 views
Verifying One-vs-All Precision and Recall calculations from a multi-class confusion matrix
I am studying multi-class classification metrics and want to confirm the correct way to compute them from a confusion matrix. A weather classifier labels days as Sunny, Rainy, Cloudy. The test results ...
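For reference, with rows as true labels and columns as predictions, one-vs-all precision for a class is its diagonal count divided by its column sum, and recall is the diagonal count divided by its row sum. A minimal sketch in Python, using made-up counts rather than the asker's (unshown) test results:

```python
import numpy as np

# Rows = true labels, columns = predicted labels, in the order Sunny, Rainy, Cloudy.
# The counts are illustrative only, not the data from the question.
labels = ["Sunny", "Rainy", "Cloudy"]
cm = np.array([[50,  5,  5],
               [ 3, 40,  7],
               [ 4,  6, 30]])

for k, name in enumerate(labels):
    tp = cm[k, k]                    # true positives for class k
    precision = tp / cm[:, k].sum()  # column sum: everything predicted as class k
    recall = tp / cm[k, :].sum()     # row sum: everything that is truly class k
    print(f"{name}: precision = {precision:.3f}, recall = {recall:.3f}")
```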
-1 votes
0 answers
26 views
Notation Convention in Linear Models
Notation Convention in Linear Models: Why $\theta^\top x$ instead of $\theta x$? Question: I'm working with CMU 10-414 Lecture 2 and I'm curious about the notation convention used to represent the ...
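For context, a sketch assuming the usual column-vector convention (whether 10-414 follows it is not visible from the excerpt): if $\theta, x \in \mathbb{R}^n$ are both column vectors, the product $\theta x$ of an $n \times 1$ matrix with another $n \times 1$ matrix is not defined, whereas $\theta^\top x$ is a $1 \times n$ times $n \times 1$ product, i.e. the scalar inner product:
$$
\theta^\top x = \begin{bmatrix} \theta_1 & \cdots & \theta_n \end{bmatrix}
\begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}
= \sum_{j=1}^{n} \theta_j x_j .
$$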
2 votes
1 answer
103 views
Integration by parts from perceptron capacity calculation
I am working through Chapter 6 of the book Statistical Mechanics of Learning by Engel and Van den Broeck. I am stuck on the following integral, going from line 6.13 to line 6.14 of the book. I ...
1 vote
1 answer
43 views
Does the union of two datasets form a mixture distribution? [closed]
I have two datasets: $A := \{X_i\}_{i=1}^{n_a}$ sampled from distribution $P_A$, and $B := \{X_j\}_{j=1}^{n_b}$ sampled from distribution $P_B$. Let $n = n_a + n_b$ be the total sample size, and ...
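A worked equation that typically settles this kind of question (a sketch only, since the full setup of the closed question is truncated here): if a single observation is produced by first drawing an index uniformly at random from the pooled sample of size $n = n_a + n_b$, then its distribution is the mixture
$$
P(x) = \frac{n_a}{n}\,P_A(x) + \frac{n_b}{n}\,P_B(x),
$$
whereas the union $A \cup B$ viewed as a dataset is simply $n_a$ independent draws from $P_A$ together with $n_b$ independent draws from $P_B$, i.e. independent but not identically distributed samples.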
1 vote
0 answers
32 views
Why is X following $\mathcal{N}(\mu + \Lambda z, \Phi)$ in the Factor Analysis model?
I’m working through some notes on Factor Analysis and I noticed something that confused me. We have $ X = \mu + \Lambda z + \epsilon $ with $z \sim \mathcal{N}(0,I_s)$, $\epsilon \sim \mathcal{N}(0,\...
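For orientation, the step that usually resolves this confusion (a sketch assuming $\epsilon \sim \mathcal{N}(0, \Phi)$ independent of $z$, consistent with the truncated excerpt): conditioned on $z$, the term $\mu + \Lambda z$ is a fixed vector and only the noise $\epsilon$ remains random, so
$$
X \mid z \;\sim\; \mathcal{N}(\mu + \Lambda z,\ \Phi),
\qquad\text{while marginally}\qquad
X \;\sim\; \mathcal{N}(\mu,\ \Lambda \Lambda^\top + \Phi).
$$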
2 votes
2 answers
221 views
Machine learning: what is the proper name for derivative of a function against a matrix?
In machine learning, it is typical to see a so-called weight matrix. As a low-dimensional example, let this matrix be defined as, $$W = \begin{bmatrix} w_{11} & w_{12} \\ w_{21} & w_{22} \end{...
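For a scalar-valued $f(W)$, this object is conventionally called the gradient of $f$ with respect to $W$, and the broader toolkit is matrix calculus; as a sketch of the standard layout convention, it is the matrix of partial derivatives with the same shape as $W$:
$$
\left(\frac{\partial f}{\partial W}\right)_{ij} = \frac{\partial f}{\partial w_{ij}},
\qquad
\frac{\partial f}{\partial W} =
\begin{bmatrix}
\frac{\partial f}{\partial w_{11}} & \frac{\partial f}{\partial w_{12}} \\
\frac{\partial f}{\partial w_{21}} & \frac{\partial f}{\partial w_{22}}
\end{bmatrix}.
$$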
0 votes
1 answer
43 views
Decision boundary in linear classification models
In linear classification models, the decision boundary is the set of points where $y = \mathbf{w}^T\mathbf{x} + b = 0$. When $\mathbf{x} \in \mathbb{R}^2$ and $\mathbf{w} \in \mathbb{R}^2$, then $y \in ...
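A minimal numerical sketch of this setup (the weight vector and bias below are made up for illustration): for every $\mathbf{x} \in \mathbb{R}^2$, $y = \mathbf{w}^T\mathbf{x} + b$ is a single real number, its sign gives the predicted class, and the set of points with $y = 0$ is a line in the plane.

```python
import numpy as np

# Illustrative parameters, not taken from the question.
w = np.array([2.0, -1.0])
b = 0.5

def score(x):
    """Return the scalar y = w^T x + b for a point x in R^2."""
    return w @ x + b

x = np.array([1.0, 3.0])
y = score(x)
print(f"y = {y:.2f} -> predicted class {int(y > 0)}")

# The decision boundary is the set w^T x + b = 0; solving for x2 shows it is a line.
x1 = np.linspace(-2.0, 2.0, 5)
x2 = -(w[0] * x1 + b) / w[1]
print(np.allclose(w[0] * x1 + w[1] * x2 + b, 0.0))  # True: these points lie on the boundary
```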
3 votes
0 answers
42 views
Density function of reverse diffusion process
All the processes involved are continuous Markov processes. The reverse diffusion and forward diffusion traverse identical trajectories in reverse temporal order. In the Machine Learning paper Deep ...
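For context, the standard continuous-time version of this identity (a sketch of the usual SDE formulation; the paper in question may state it in different notation): if the forward diffusion follows $dx = f(x,t)\,dt + g(t)\,dw$ with marginal density $p_t(x)$, then the reverse-time process
$$
dx = \bigl[f(x,t) - g(t)^2\,\nabla_x \log p_t(x)\bigr]\,dt + g(t)\,d\bar{w},
$$
run backwards in time, has the same marginal densities $p_t(x)$ for every $t$, which is the precise sense in which the forward and reverse diffusions traverse the same trajectories in reverse temporal order.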