How Affinity Propagation Clustering Works
Affinity Propagation exchanges messages between data points to identify exemplars (actual data points that serve as cluster centers), so the number of clusters K does not need to be specified in advance.
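The message passing alternates between two updates: responsibilities r(i,k), sent from point i to candidate exemplar k, and availabilities a(i,k), sent back from k to i. A minimal NumPy sketch of these updates (a toy illustration, not a production implementation; the function name and parameters are our own):

```python
import numpy as np

def affinity_propagation(S, damping=0.9, iters=200):
    """Toy Affinity Propagation on a precomputed similarity matrix S.

    S[i, k] is the similarity of point i to candidate exemplar k;
    the diagonal S[k, k] holds the "preference" of k to be an exemplar.
    Returns, for each point, the index of its chosen exemplar.
    """
    n = S.shape[0]
    R = np.zeros((n, n))  # responsibilities: evidence that k should serve i
    A = np.zeros((n, n))  # availabilities: evidence that i should pick k

    for _ in range(iters):
        # Responsibility: r(i,k) = s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
        AS = A + S
        top = np.argmax(AS, axis=1)
        first = AS[np.arange(n), top]
        AS[np.arange(n), top] = -np.inf       # mask the best to find the runner-up
        second = AS.max(axis=1)
        R_new = S - first[:, None]
        R_new[np.arange(n), top] = S[np.arange(n), top] - second
        R = damping * R + (1 - damping) * R_new

        # Availability: a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, np.diag(R))      # keep r(k,k) un-clipped
        A_new = Rp.sum(axis=0)[None, :] - Rp
        diag = np.diag(A_new).copy()          # a(k,k) is not capped at zero
        A_new = np.minimum(A_new, 0)
        np.fill_diagonal(A_new, diag)
        A = damping * A + (1 - damping) * A_new

    return np.argmax(A + R, axis=1)  # exemplar chosen by each point

# Two tight groups of 2-D points; similarity = negative squared distance.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
S = -((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
np.fill_diagonal(S, np.median(S))  # median similarity is a common default preference
labels = affinity_propagation(S)
```

The damping factor blends each new message with the previous one to avoid oscillation; on the toy data above, the two well-separated groups each elect one of their own points as exemplar.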
About the Wine Quality Dataset
178 wine samples described by 13 chemical properties. Well suited to discovering natural groupings or predicting the wine class.
- Samples
- 178
- Features
- 13
- Type
- Numeric
- Category
- Exemplar-based
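A 178-sample, 13-feature wine dataset of this shape ships with scikit-learn, so the setup can be sketched in a few lines (assuming scikit-learn is installed; the standardization step is our recommendation, since the raw features have very different scales):

```python
from sklearn.datasets import load_wine
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AffinityPropagation

# 178 samples, 13 numeric chemical features
X, y = load_wine(return_X_y=True)

# Features span very different ranges (e.g. proline vs. hue),
# so standardize before computing similarities.
X_scaled = StandardScaler().fit_transform(X)

ap = AffinityPropagation(damping=0.9, max_iter=500, random_state=0)
labels = ap.fit_predict(X_scaled)
n_clusters = len(ap.cluster_centers_indices_)
```

`cluster_centers_indices_` holds the indices of the samples elected as exemplars, so every cluster center is an actual wine from the dataset.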
Key Metrics to Watch
Silhouette Score
Measures how similar a point is to its own cluster vs. other clusters. Ranges from −1 to +1; higher is better.
Calinski-Harabasz Index
Ratio of between-cluster to within-cluster variance. Higher values indicate denser, well-separated clusters.
Davies-Bouldin Index
Average similarity between each cluster and its most similar cluster. Lower is better.
Inertia (Within-Cluster SSE)
Sum of squared distances from each point to its assigned cluster center (an exemplar, in Affinity Propagation). Lower indicates tighter clusters.
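All four metrics are available as one-liners in scikit-learn (a sketch assuming scikit-learn is installed; K-Means is used here only as a convenient source of labels — the metric functions accept a label assignment from any clustering algorithm, including Affinity Propagation):

```python
from sklearn.datasets import load_wine
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import (silhouette_score,
                             calinski_harabasz_score,
                             davies_bouldin_score)

X = StandardScaler().fit_transform(load_wine().data)

# Any clustering labels work below; K-Means with k=3 is just an example.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
labels = km.labels_

sil = silhouette_score(X, labels)        # in [-1, 1], higher is better
ch = calinski_harabasz_score(X, labels)  # higher is better
db = davies_bouldin_score(X, labels)     # lower is better
inertia = km.inertia_                    # within-cluster SSE, lower is tighter
```

Note that inertia always decreases as the number of clusters grows, so it is only comparable between solutions with the same cluster count; the other three metrics can be compared across different cluster counts.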
When to Use Affinity Propagation Clustering
Affinity Propagation Clustering belongs to the exemplar-based family of clustering algorithms. These methods select actual data points as cluster centers (exemplars) by exchanging messages between points, so the number of clusters emerges from the data rather than being fixed in advance. This makes Affinity Propagation a good fit when K is unknown and when cluster centers should be interpretable, real samples rather than averaged centroids.
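Although Affinity Propagation does not take K directly, its `preference` parameter (the self-similarity placed on the diagonal; scikit-learn defaults it to the median pairwise similarity) indirectly controls how many exemplars emerge. A rough sketch, assuming scikit-learn, with preference values chosen only for illustration:

```python
from sklearn.datasets import load_wine
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AffinityPropagation

X = StandardScaler().fit_transform(load_wine().data)

# More negative preference -> fewer points elect themselves as exemplars.
counts = []
for pref in (-50, -500, -2000):
    ap = AffinityPropagation(preference=pref, damping=0.9,
                             max_iter=1000, random_state=0).fit(X)
    counts.append(len(ap.cluster_centers_indices_))
print(counts)  # cluster counts shrink as the preference drops
```

In practice, the preference can be swept over a range and the value picked by a validity metric such as the silhouette score.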
Related Examples
K-Means Clustering on Wine Quality
See K-Means Clustering applied to the Wine Quality dataset (178 samples, 13 features). Interactive visualization, metrics, and analysis.
K-Medoids Clustering on Wine Quality
See K-Medoids Clustering applied to the Wine Quality dataset (178 samples, 13 features). Interactive visualization, metrics, and analysis.
DBSCAN Clustering on Wine Quality
See DBSCAN Clustering applied to the Wine Quality dataset (178 samples, 13 features). Interactive visualization, metrics, and analysis.
HDBSCAN Clustering on Wine Quality
See HDBSCAN Clustering applied to the Wine Quality dataset (178 samples, 13 features). Interactive visualization, metrics, and analysis.