Compare commits: f1a8eefdc3...feature/re (8 commits)

| SHA1 |
|---|
| 91fcd931e8 |
| 9ccb149dda |
| 972474750f |
| 2f69ff4ecf |
| db7160b094 |
| f6d493eb4e |
| 2545659f12 |
| 334332bc78 |
.claude/settings.json (new file, +7 lines)
@@ -0,0 +1,7 @@
+{
+  "permissions": {
+    "allow": [
+      "Bash(flutter analyze:*)"
+    ]
+  }
+}
@@ -3,7 +3,14 @@
     "allow": [
       "Bash(flutter clean:*)",
       "Bash(flutter pub get:*)",
-      "Bash(flutter run:*)"
+      "Bash(flutter run:*)",
+      "Bash(cmake:*)",
+      "Bash(where:*)",
+      "Bash(winget search:*)",
+      "Bash(winget install:*)",
+      "Bash(\"/c/Program Files \\(x86\\)/Microsoft Visual Studio/Installer/vs_installer.exe\" modify --installPath \"C:\\\\Program Files \\(x86\\)\\\\Microsoft Visual Studio\\\\2022\\\\BuildTools\" --add Microsoft.VisualStudio.Workload.VCTools --add Microsoft.VisualStudio.Component.VC.Tools.x86.x64 --add Microsoft.VisualStudio.Component.Windows11SDK.22621 --passive --wait)",
+      "Bash(cmd //c \"\"\"C:\\\\Program Files\\\\Microsoft Visual Studio\\\\18\\\\Community\\\\Common7\\\\Tools\\\\VsDevCmd.bat\"\" && flutter run -d windows\")",
+      "Bash(flutter doctor:*)"
     ]
   }
 }
CHANGELOG.md (new file, +25 lines)
@@ -0,0 +1,25 @@
+# Changelog
+
+## [v0.0.1] - 2026-01-29
+
+### Added
+- **Analysis UI**:
+  - Implemented a "morphing" save button: the floating button transforms into a wide bottom-of-page button while scrolling.
+  - Added scroll and spacing handling for better ergonomics.
+  - Visualization of impacts and grouping statistics.
+- **Desktop support (Windows)**:
+  - Configured the SQLite database to run on Windows via `sqflite_common_ffi`.
+  - Conditional initialization depending on the platform.
+
+### Fixed
+- **Windows crash**: Resolved the crash caused by the missing FFI database factory initialization.
+- **Dependencies**: Pinned `sqflite_common_ffi` to `2.3.3` to work around a cache/corruption issue with version `2.4.0+2`.
+- **UI/UX**:
+  - Fixed text overflow ("zebra stripes") in the save button during the animation using `FittedBox`.
+  - Optimized the "Groupement" title display in the statistics to avoid overflow on small screens.
+  - Cleaned up redundant calls (`super.initState`) and fixed widget structure (improperly closed `Stack`).
+
+### Commit History
+- `db7160b` - +deactivation (2026-01-29)
+- `f1a8eef` - added fix (2026-01-28)
+- `031d4a4` - first beta app version (2026-01-18)
README.md
@@ -1,17 +1,35 @@
-# bully
+# Bully - Target Analyzer
 
-A new Flutter project.
+Cross-platform Flutter application for analyzing and tracking your shooting sessions.
 
-## Getting Started
+## Main Features
 
-This project is a starting point for a Flutter application.
+* **Capture and Analysis**: Take a photo of your target and analyze your impacts.
+* **Automatic Detection**: Uses algorithms to automatically detect bullet impacts on the target.
+* **Calibration**: Precise calibration tools to define the target's size and center, ensuring accurate measurements.
+* **Detailed Statistics**:
+  * Total score calculation.
+  * Grouping analysis (H+L, mean diameter).
+  * Graphical visualization of dispersion.
+* **History**: Save your sessions with notes and track your progress over time.
+* **Intuitive Interface**: Modern, fluid design with a dynamic save button that adapts to your navigation.
 
-A few resources to get you started if this is your first Flutter project:
+## Technical Details
 
-- [Learn Flutter](https://docs.flutter.dev/get-started/learn-flutter)
-- [Write your first Flutter app](https://docs.flutter.dev/get-started/codelab)
-- [Flutter learning resources](https://docs.flutter.dev/reference/learning-resources)
+* **Framework**: Flutter (compatible with Android, iOS, Windows, Linux, macOS).
+* **Database**: SQLite (via `sqflite` and `sqflite_common_ffi` for Desktop support).
+* **Charts**: `fl_chart` for data visualization.
+* **Architecture**: Provider for state management.
 
-For help getting started with Flutter development, view the
-[online documentation](https://docs.flutter.dev/), which offers tutorials,
-samples, guidance on mobile development, and a full API reference.
+## Installation
+
+1. Make sure Flutter is installed.
+2. Clone the repository.
+3. Install dependencies:
+   ```bash
+   flutter pub get
+   ```
+4. Run the app:
+   ```bash
+   flutter run
+   ```
devtools_options.yaml (new file, +3 lines)
@@ -0,0 +1,3 @@
+description: This file stores settings for Dart & Flutter DevTools.
+documentation: https://docs.flutter.dev/tools/devtools/extensions#configure-extension-enablement-states
+extensions:
@@ -34,11 +34,11 @@ class AnalysisProvider extends ChangeNotifier {
     required GroupingAnalyzerService groupingAnalyzerService,
     required SessionRepository sessionRepository,
     DistortionCorrectionService? distortionService,
   }) : _detectionService = detectionService,
        _scoreCalculatorService = scoreCalculatorService,
        _groupingAnalyzerService = groupingAnalyzerService,
        _sessionRepository = sessionRepository,
        _distortionService = distortionService ?? DistortionCorrectionService();
 
   AnalysisState _state = AnalysisState.initial;
   String? _errorMessage;
@@ -80,7 +80,8 @@ class AnalysisProvider extends ChangeNotifier {
   double get targetCenterY => _targetCenterY;
   double get targetRadius => _targetRadius;
   int get ringCount => _ringCount;
-  List<double>? get ringRadii => _ringRadii != null ? List.unmodifiable(_ringRadii!) : null;
+  List<double>? get ringRadii =>
+      _ringRadii != null ? List.unmodifiable(_ringRadii!) : null;
   double get imageAspectRatio => _imageAspectRatio;
   List<Shot> get shots => List.unmodifiable(_shots);
   ScoreResult? get scoreResult => _scoreResult;
@@ -97,13 +98,22 @@ class AnalysisProvider extends ChangeNotifier {
   DistortionParameters? get distortionParams => _distortionParams;
   String? get correctedImagePath => _correctedImagePath;
   bool get hasDistortion => _distortionParams?.needsCorrection ?? false;
 
   /// Returns the image path to display (corrected if enabled, original otherwise)
-  String? get displayImagePath => _distortionCorrectionEnabled && _correctedImagePath != null
+  String? get displayImagePath =>
+      _distortionCorrectionEnabled && _correctedImagePath != null
       ? _correctedImagePath
       : _imagePath;
 
   /// Analyze an image
-  Future<void> analyzeImage(String imagePath, TargetType targetType) async {
+  ///
+  /// [autoAnalyze] determines if we should run automatic detection immediately.
+  /// If false, only the image is loaded and default target parameters are set.
+  Future<void> analyzeImage(
+    String imagePath,
+    TargetType targetType, {
+    bool autoAnalyze = true,
+  }) async {
     _state = AnalysisState.loading;
     _imagePath = imagePath;
     _targetType = targetType;
@@ -119,6 +129,20 @@ class AnalysisProvider extends ChangeNotifier {
     _imageAspectRatio = frame.image.width / frame.image.height;
     frame.image.dispose();
 
+    if (!autoAnalyze) {
+      // Just setup default values without running detection
+      _targetCenterX = 0.5;
+      _targetCenterY = 0.5;
+      _targetRadius = 0.4;
+
+      // Initialize empty shots list
+      _shots = [];
+
+      _state = AnalysisState.success;
+      notifyListeners();
+      return;
+    }
+
     // Detect target and impacts
     final result = _detectionService.detectTarget(imagePath, targetType);
@@ -162,13 +186,7 @@ class AnalysisProvider extends ChangeNotifier {
   /// Add a manual shot
   void addShot(double x, double y) {
     final score = _calculateShotScore(x, y);
-    final shot = Shot(
-      id: _uuid.v4(),
-      x: x,
-      y: y,
-      score: score,
-      sessionId: '',
-    );
+    final shot = Shot(id: _uuid.v4(), x: x, y: y, score: score, sessionId: '');
 
     _shots.add(shot);
     _recalculateScores();
@@ -190,11 +208,7 @@ class AnalysisProvider extends ChangeNotifier {
     if (index == -1) return;
 
     final newScore = _calculateShotScore(newX, newY);
-    _shots[index] = _shots[index].copyWith(
-      x: newX,
-      y: newY,
-      score: newScore,
-    );
+    _shots[index] = _shots[index].copyWith(x: newX, y: newY, score: newScore);
 
     _recalculateScores();
     _recalculateGrouping();
@@ -254,16 +268,137 @@ class AnalysisProvider extends ChangeNotifier {
     return detectedImpacts.length;
   }
 
+  /// Auto-detect impacts using OpenCV (Hough Circles + Contours)
+  ///
+  /// NOTE: OpenCV is currently disabled on Windows due to build issues.
+  /// This method returns 0 (no impacts detected).
+  /// Use autoDetectImpacts() instead.
+  ///
+  /// Uses OpenCV algorithms for more robust detection:
+  /// - Hough transform to detect circles
+  /// - Contour analysis with circularity filtering
+  Future<int> autoDetectImpactsWithOpenCV({
+    double cannyThreshold1 = 50,
+    double cannyThreshold2 = 150,
+    double minDist = 20,
+    double param1 = 100,
+    double param2 = 30,
+    int minRadius = 5,
+    int maxRadius = 50,
+    int blurSize = 5,
+    bool useContourDetection = true,
+    double minCircularity = 0.6,
+    double minContourArea = 50,
+    double maxContourArea = 5000,
+    bool clearExisting = false,
+  }) async {
+    if (_imagePath == null || _targetType == null) return 0;
+
+    final settings = OpenCVDetectionSettings(
+      cannyThreshold1: cannyThreshold1,
+      cannyThreshold2: cannyThreshold2,
+      minDist: minDist,
+      param1: param1,
+      param2: param2,
+      minRadius: minRadius,
+      maxRadius: maxRadius,
+      blurSize: blurSize,
+      useContourDetection: useContourDetection,
+      minCircularity: minCircularity,
+      minContourArea: minContourArea,
+      maxContourArea: maxContourArea,
+    );
+
+    final detectedImpacts = _detectionService.detectImpactsWithOpenCV(
+      _imagePath!,
+      _targetType!,
+      _targetCenterX,
+      _targetCenterY,
+      _targetRadius,
+      _ringCount,
+      settings: settings,
+    );
+
+    if (clearExisting) {
+      _shots.clear();
+    }
+
+    // Add detected impacts as shots
+    for (final impact in detectedImpacts) {
+      final score = _calculateShotScore(impact.x, impact.y);
+      final shot = Shot(
+        id: _uuid.v4(),
+        x: impact.x,
+        y: impact.y,
+        score: score,
+        sessionId: '',
+      );
+      _shots.add(shot);
+    }
+
+    _recalculateScores();
+    _recalculateGrouping();
+    notifyListeners();
+
+    return detectedImpacts.length;
+  }
+
+  /// Detect impacts with OpenCV using reference points
+  Future<int> detectFromReferencesWithOpenCV({
+    double tolerance = 2.0,
+    bool clearExisting = false,
+  }) async {
+    if (_imagePath == null ||
+        _targetType == null ||
+        _referenceImpacts.length < 2) {
+      return 0;
+    }
+
+    // Convert the references
+    final references = _referenceImpacts
+        .map((shot) => ReferenceImpact(x: shot.x, y: shot.y))
+        .toList();
+
+    final detectedImpacts = _detectionService
+        .detectImpactsWithOpenCVFromReferences(
+      _imagePath!,
+      _targetType!,
+      _targetCenterX,
+      _targetCenterY,
+      _targetRadius,
+      _ringCount,
+      references,
+      tolerance: tolerance,
+    );
+
+    if (clearExisting) {
+      _shots.clear();
+    }
+
+    // Add detected impacts as shots
+    for (final impact in detectedImpacts) {
+      final score = _calculateShotScore(impact.x, impact.y);
+      final shot = Shot(
+        id: _uuid.v4(),
+        x: impact.x,
+        y: impact.y,
+        score: score,
+        sessionId: '',
+      );
+      _shots.add(shot);
+    }
+
+    _recalculateScores();
+    _recalculateGrouping();
+    notifyListeners();
+
+    return detectedImpacts.length;
+  }
+
   /// Add a reference impact for calibrated detection
   void addReferenceImpact(double x, double y) {
     final score = _calculateShotScore(x, y);
-    final shot = Shot(
-      id: _uuid.v4(),
-      x: x,
-      y: y,
-      score: score,
-      sessionId: '',
-    );
+    final shot = Shot(id: _uuid.v4(), x: x, y: y, score: score, sessionId: '');
     _referenceImpacts.add(shot);
     notifyListeners();
   }
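The `minCircularity: 0.6` default above filters contour candidates by how circular they are; the conventional metric is `4π·area / perimeter²`, which is 1.0 for a perfect circle and drops toward 0 for elongated shapes. A minimal Python sketch of that metric (illustrative only; the app's Dart detection service is not shown in this diff):

```python
import math

def circularity(points):
    """4*pi*area / perimeter^2 for a closed polygon; 1.0 for a perfect circle."""
    n = len(points)
    area = 0.0
    perimeter = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1              # shoelace formula
        perimeter += math.hypot(x2 - x1, y2 - y1)
    area = abs(area) / 2.0
    return 4.0 * math.pi * area / (perimeter * perimeter)

# A near-circle easily clears a 0.6 threshold; a thin sliver does not.
circle = [(math.cos(2 * math.pi * k / 64), math.sin(2 * math.pi * k / 64))
          for k in range(64)]
sliver = [(0, 0), (10, 0), (10, 1), (0, 1)]
assert circularity(circle) > 0.6
assert circularity(sliver) < 0.6
```

This is why bullet holes (roughly round) survive the filter while tears and shadows (elongated) are rejected.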
@@ -304,7 +439,9 @@ class AnalysisProvider extends ChangeNotifier {
     double tolerance = 2.0,
     bool clearExisting = false,
   }) async {
-    if (_imagePath == null || _targetType == null || _learnedCharacteristics == null) {
+    if (_imagePath == null ||
+        _targetType == null ||
+        _learnedCharacteristics == null) {
       return 0;
     }
@@ -344,7 +481,13 @@ class AnalysisProvider extends ChangeNotifier {
   }
 
   /// Adjust target position
-  void adjustTargetPosition(double centerX, double centerY, double radius, {int? ringCount, List<double>? ringRadii}) {
+  void adjustTargetPosition(
+    double centerX,
+    double centerY,
+    double radius, {
+    int? ringCount,
+    List<double>? ringRadii,
+  }) {
     _targetCenterX = centerX;
     _targetCenterY = centerY;
     _targetRadius = radius;
@@ -405,6 +548,44 @@ class AnalysisProvider extends ChangeNotifier {
     }
   }
 
+  /* version two, to be tested */
+  /// Computes AND applies the correction for immediate feedback
+  Future<void> calculateAndApplyDistortion() async {
+    // 1. Compute the parameters (your current code)
+    _distortionParams = _distortionService.calculateDistortionFromCalibration(
+      targetCenterX: _targetCenterX,
+      targetCenterY: _targetCenterY,
+      targetRadius: _targetRadius,
+      imageAspectRatio: _imageAspectRatio,
+    );
+
+    // 2. Check whether a correction is actually needed
+    if (_distortionParams != null && _distortionParams!.needsCorrection) {
+      // 3. Apply the transformation immediately (asynchronous method)
+      await applyDistortionCorrection();
+    } else {
+      notifyListeners(); // Notify anyway if there is no correction
+    }
+  }
+
+  Future<void> runFullDistortionWorkflow() async {
+    _state = AnalysisState.loading; // Shows a spinner in your UI
+    notifyListeners();
+
+    try {
+      calculateDistortion(); // Computes the parameters
+      await applyDistortionCorrection(); // Generates the corrected file
+      _distortionCorrectionEnabled = true; // Enables the display
+      _state = AnalysisState.success;
+    } catch (e) {
+      _errorMessage = "Erreur de rendu : $e";
+      _state = AnalysisState.error;
+    } finally {
+      notifyListeners();
+    }
+  }
+  /* end of section two, to be tested */
+
   int _calculateShotScore(double x, double y) {
     if (_targetType == TargetType.concentric) {
       return _scoreCalculatorService.calculateConcentricScore(
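The `runFullDistortionWorkflow` pattern above (set loading, do the work, record success or error, always notify in `finally`) is language-agnostic. A Python stand-in with hypothetical names, not the app's actual provider, showing that listeners are notified exactly twice whether the work succeeds or throws:

```python
from enum import Enum

class AnalysisState(Enum):
    INITIAL = "initial"
    LOADING = "loading"
    SUCCESS = "success"
    ERROR = "error"

class WorkflowProvider:
    """Loading/success/error state flow, mirroring the Dart workflow shape."""

    def __init__(self, work):
        self.state = AnalysisState.INITIAL
        self.error_message = None
        self.notifications = 0
        self._work = work  # stand-in for compute + apply correction

    def notify_listeners(self):
        self.notifications += 1  # a real provider would repaint the UI here

    def run(self):
        self.state = AnalysisState.LOADING  # UI shows a spinner
        self.notify_listeners()
        try:
            self._work()
            self.state = AnalysisState.SUCCESS
        except Exception as exc:
            self.error_message = f"Rendering error: {exc}"
            self.state = AnalysisState.ERROR
        finally:
            self.notify_listeners()  # always notify, success or failure

def boom():
    raise RuntimeError("boom")

ok = WorkflowProvider(lambda: None)
ok.run()
bad = WorkflowProvider(boom)
bad.run()
print(ok.state, bad.state)  # AnalysisState.SUCCESS AnalysisState.ERROR
```

The `finally` notification is the important detail: without it, a thrown error would leave the UI stuck on the spinner.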
(File diff suppressed because it is too large.)
@@ -39,15 +39,16 @@ class _CaptureScreenState extends State<CaptureScreen> {
       child: Column(
         crossAxisAlignment: CrossAxisAlignment.stretch,
         children: [
+          // TODO: once the silhouette target is in place, add the selector back
           // Target type selection
-          _buildSectionTitle('Type de Cible'),
-          const SizedBox(height: 12),
-          TargetTypeSelector(
-            selectedType: _selectedType,
-            onTypeSelected: (type) {
-              setState(() => _selectedType = type);
-            },
-          ),
+          // _buildSectionTitle('Type de Cible'),
+          // const SizedBox(height: 12),
+          // TargetTypeSelector(
+          //   selectedType: _selectedType,
+          //   onTypeSelected: (type) {
+          //     setState(() => _selectedType = type);
+          //   },
+          // ),
           const SizedBox(height: AppConstants.largePadding),
 
           // Image source selection
@@ -148,21 +148,27 @@ class _HomeScreenState extends State<HomeScreen> {
         Text(
           'Statistiques',
           style: Theme.of(context).textTheme.titleLarge?.copyWith(
             fontWeight: FontWeight.bold,
           ),
         ),
         const SizedBox(height: 12),
         Row(
           children: [
+            // --- SESSIONS BUTTON (navigates to Statistics) ---
             Expanded(
-              child: StatsCard(
-                icon: Icons.assessment,
-                title: 'Sessions',
-                value: '${_stats!['totalSessions']}',
-                color: AppTheme.primaryColor,
+              child: InkWell(
+                onTap: () => _navigateToStatistics(context),
+                borderRadius: BorderRadius.circular(AppConstants.borderRadius),
+                child: StatsCard(
+                  icon: Icons.assessment,
+                  title: 'Sessions',
+                  value: '${_stats!['totalSessions']}',
+                  color: AppTheme.primaryColor,
+                ),
               ),
             ),
             const SizedBox(width: 12),
+            // This button stays static (or you can add an action)
             Expanded(
               child: StatsCard(
                 icon: Icons.gps_fixed,
@@ -176,15 +182,21 @@ class _HomeScreenState extends State<HomeScreen> {
         const SizedBox(height: 12),
         Row(
           children: [
+            // --- AVERAGE SCORE BUTTON (navigates to History) ---
             Expanded(
-              child: StatsCard(
-                icon: Icons.trending_up,
-                title: 'Score Moyen',
-                value: (_stats!['averageScore'] as double).toStringAsFixed(1),
-                color: AppTheme.warningColor,
+              child: InkWell(
+                onTap: () => _navigateToHistory(context),
+                borderRadius: BorderRadius.circular(AppConstants.borderRadius),
+                child: StatsCard(
+                  icon: Icons.trending_up,
+                  title: 'Historique',
+                  value: (_stats!['averageScore'] as double).toStringAsFixed(1),
+                  color: AppTheme.warningColor,
+                ),
               ),
             ),
             const SizedBox(width: 12),
+            // This button stays static
             Expanded(
               child: StatsCard(
                 icon: Icons.emoji_events,
@@ -402,13 +402,64 @@ class DistortionCorrectionService {
     return h;
   }
 
+  /// Solves the linear system to find the 3x3 homography matrix.
+  /// Uses Gauss-Jordan elimination with partial pivoting for stability.
   List<double> _solveHomography(List<List<double>> a) {
-    // Simplified implementation - normalization and solving
-    // In practice, a real SVD decomposition should be used
-
-    // For now, return an identity matrix
-    // TODO: implement a real solver
-    return [1, 0, 0, 0, 1, 0, 0, 0, 1];
+    // The system 'a' is 8x9 (8 equations, 9 unknowns).
+    // We fix h8 = 1.0, which reduces it to an 8x8 system.
+    final int n = 8;
+    final List<List<double>> matrix =
+        List.generate(n, (i) => List<double>.from(a[i]));
+
+    // Vector B (the constants on the other side of the equality).
+    // In DLT, -h8 * dx (or dy) becomes the constant term.
+    final List<double> b = List.generate(n, (i) => -matrix[i][8]);
+
+    // Gaussian elimination
+    for (int i = 0; i < n; i++) {
+      // Find the pivot (largest value in the column, to limit rounding errors)
+      int pivot = i;
+      for (int j = i + 1; j < n; j++) {
+        if (matrix[j][i].abs() > matrix[pivot][i].abs()) {
+          pivot = j;
+        }
+      }
+
+      // Swap rows (if needed)
+      final List<double> tempRow = matrix[i];
+      matrix[i] = matrix[pivot];
+      matrix[pivot] = tempRow;
+
+      final double tempB = b[i];
+      b[i] = b[pivot];
+      b[pivot] = tempB;
+
+      // Singularity check (avoid division by zero)
+      if (matrix[i][i].abs() < 1e-10) {
+        return [1, 0, 0, 0, 1, 0, 0, 0, 1]; // Fall back to identity on failure
+      }
+
+      // Eliminate the entries below the pivot row
+      for (int j = i + 1; j < n; j++) {
+        final double factor = matrix[j][i] / matrix[i][i];
+        b[j] -= factor * b[i];
+        for (int k = i; k < n; k++) {
+          matrix[j][k] -= factor * matrix[i][k];
+        }
+      }
+    }
+
+    // Back substitution
+    final List<double> h = List.filled(9, 0.0);
+    for (int i = n - 1; i >= 0; i--) {
+      double sum = 0.0;
+      for (int j = i + 1; j < n; j++) {
+        sum += matrix[i][j] * h[j];
+      }
+      h[i] = (b[i] - sum) / matrix[i][i];
+    }
+
+    h[8] = 1.0; // Final normalization
+    return h;
   }
 
   ({double x, double y}) _applyPerspectiveTransform(List<double> h, double x, double y) {
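The partial-pivoting elimination used by `_solveHomography` can be checked against a small system with a known solution. A Python sketch of the same algorithm (hypothetical helper name, not part of the app):

```python
def solve_pivot(matrix, b):
    """Solve matrix @ x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    m = [row[:] for row in matrix]  # work on copies
    b = b[:]
    for i in range(n):
        # pick the row with the largest entry in column i (partial pivoting)
        p = max(range(i, n), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        b[i], b[p] = b[p], b[i]
        if abs(m[i][i]) < 1e-10:
            raise ValueError("singular system")
        # eliminate the entries below the pivot row
        for j in range(i + 1, n):
            f = m[j][i] / m[i][i]
            b[j] -= f * b[i]
            for k in range(i, n):
                m[j][k] -= f * m[i][k]
    # back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(m[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / m[i][i]
    return x

# 2x + y = 5, x + 3y = 10  ->  x = 1, y = 3
print(solve_pivot([[2, 1], [1, 3]], [5, 10]))  # [1.0, 3.0]
```

In the Dart version, `n` is fixed at 8 because fixing `h8 = 1` turns the 8x9 DLT system into exactly this shape.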
@@ -196,10 +196,11 @@ class ImageProcessingService {
 
   /// Analyze reference impacts to learn their characteristics
   /// This actually finds the blob at each reference point and extracts its real properties
+  /// IMPROVED: wider search and more robust analysis
   ImpactCharacteristics? analyzeReferenceImpacts(
     String imagePath,
     List<ReferenceImpact> references, {
-    int searchRadius = 30,
+    int searchRadius = 50, // Increased from 30 to 50
   }) {
     if (references.length < 2) return null;
 
@@ -209,10 +210,10 @@ class ImageProcessingService {
     final originalImage = img.decodeImage(bytes);
     if (originalImage == null) return null;
 
-    // Resize for faster processing
+    // Resize for faster processing - larger maximum size
     img.Image image;
     double scale = 1.0;
-    final maxDimension = 1000;
+    final maxDimension = 1200; // Increased for more precision
     if (originalImage.width > maxDimension || originalImage.height > maxDimension) {
       scale = maxDimension / math.max(originalImage.width, originalImage.height);
       image = img.copyResize(
@@ -235,45 +236,67 @@ class ImageProcessingService {
     final fillRatios = <double>[];
     final thresholds = <double>[];
 
-    for (final ref in references) {
+    print('Analyzing ${references.length} reference impacts...');
+
+    for (int refIndex = 0; refIndex < references.length; refIndex++) {
+      final ref = references[refIndex];
       final centerX = (ref.x * width).round().clamp(0, width - 1);
       final centerY = (ref.y * height).round().clamp(0, height - 1);
 
-      // Find the darkest point in the search area (the center of the impact)
+      print('Reference $refIndex at ($centerX, $centerY)');
+
+      // IMPROVEMENT: search for the darkest point over a wider area
       int darkestX = centerX;
       int darkestY = centerY;
       double darkestLum = 255;
 
-      for (int dy = -searchRadius; dy <= searchRadius; dy++) {
-        for (int dx = -searchRadius; dx <= searchRadius; dx++) {
-          final px = centerX + dx;
-          final py = centerY + dy;
-          if (px < 0 || px >= width || py < 0 || py >= height) continue;
-
-          final pixel = blurred.getPixel(px, py);
-          final lum = img.getLuminance(pixel).toDouble();
-          if (lum < darkestLum) {
-            darkestLum = lum;
-            darkestX = px;
-            darkestY = py;
+      // Spiral search for the darkest point
+      for (int r = 0; r <= searchRadius; r++) {
+        for (int dy = -r; dy <= r; dy++) {
+          for (int dx = -r; dx <= r; dx++) {
+            // Only the perimeter of the square, to avoid duplicates
+            if (r > 0 && math.max(dx.abs(), dy.abs()) < r) continue;
+
+            final px = centerX + dx;
+            final py = centerY + dy;
+            if (px < 0 || px >= width || py < 0 || py >= height) continue;
+
+            final pixel = blurred.getPixel(px, py);
+            final lum = img.getLuminance(pixel).toDouble();
+            if (lum < darkestLum) {
+              darkestLum = lum;
+              darkestX = px;
+              darkestY = py;
+            }
           }
         }
+
+        // If we have already found a very dark point, we can stop
+        if (darkestLum < 50 && r > 5) break;
       }
 
+      print('  Darkest point at ($darkestX, $darkestY), lum=$darkestLum');
+
       // Now find the blob at the darkest point using adaptive threshold
-      // Start from the darkest point and expand until we find the boundary
       final blobResult = _findBlobAtPoint(blurred, darkestX, darkestY, width, height);
 
-      if (blobResult != null) {
+      if (blobResult != null && blobResult.size >= 10) { // At least 10 pixels
        luminances.add(blobResult.avgLuminance);
        sizes.add(blobResult.size.toDouble());
        circularities.add(blobResult.circularity);
        fillRatios.add(blobResult.fillRatio);
        thresholds.add(blobResult.threshold);
+        print('  Found blob: size=${blobResult.size}, circ=${blobResult.circularity.toStringAsFixed(2)}, '
+            'fill=${blobResult.fillRatio.toStringAsFixed(2)}, threshold=${blobResult.threshold.toStringAsFixed(0)}');
+      } else {
+        print('  No valid blob found at this reference');
       }
     }
 
-    if (luminances.isEmpty) return null;
+    if (luminances.isEmpty) {
+      print('ERROR: No valid blobs found from any reference!');
+      return null;
+    }
 
     // Calculate statistics
     final avgLum = luminances.reduce((a, b) => a + b) / luminances.length;
@@ -290,17 +313,25 @@ class ImageProcessingService {
       sizeVariance += math.pow(sizes[i] - avgSize, 2);
     }
     final lumStdDev = math.sqrt(lumVariance / luminances.length);
-    final sizeStdDev = math.sqrt(sizeVariance / sizes.length);
+    // IMPROVEMENT: enforce a minimum standard deviation to avoid overly narrow ranges
+    final sizeStdDev = math.max(
+      math.sqrt(sizeVariance / sizes.length),
+      avgSize * 0.3, // At least 30% variance
+    );
 
-    return ImpactCharacteristics(
+    final result = ImpactCharacteristics(
       avgLuminance: avgLum,
-      luminanceStdDev: lumStdDev,
+      luminanceStdDev: math.max(lumStdDev, 10), // Minimum variance of 10
       avgSize: avgSize,
       sizeStdDev: sizeStdDev,
       avgCircularity: avgCirc,
       avgFillRatio: avgFill,
       avgDarkThreshold: avgThreshold,
     );
+
+    print('Learned characteristics: $result');
+
+    return result;
   } catch (e) {
     print('Error analyzing reference impacts: $e');
     return null;
@@ -308,25 +339,30 @@ class ImageProcessingService {
   }
 
   /// Find a blob at a specific point and extract its characteristics
+  /// IMPROVED: uses several detection methods and returns the best result
   _BlobAnalysis? _findBlobAtPoint(img.Image image, int startX, int startY, int width, int height) {
     // Get the luminance at the center point
     final centerPixel = image.getPixel(startX, startY);
     final centerLum = img.getLuminance(centerPixel).toDouble();
 
-    // Find the threshold by looking at the luminance gradient around the point
-    // Sample in expanding circles to find where the blob ends
+    // METHOD 1: radial expansion to find the edge
     double sumLum = centerLum;
     int pixelCount = 1;
     double maxRadius = 0;
 
-    // Sample at different radii to find the edge
-    for (int r = 1; r <= 50; r++) {
+    // Collect luminances at different radii for a more robust analysis
+    final radialLuminances = <double>[];
+
+    // Sample at different radii to find the edge - REASONABLE LIMIT for bullet impacts
+    final maxSearchRadius = 60; // A bullet impact is no more than 60 pixels in radius
+    for (int r = 1; r <= maxSearchRadius; r++) {
       double ringSum = 0;
       int ringCount = 0;
 
       // Sample points on a ring
-      for (int i = 0; i < 16; i++) {
-        final angle = (i / 16) * 2 * math.pi;
+      final numSamples = math.max(12, r ~/ 2);
+      for (int i = 0; i < numSamples; i++) {
+        final angle = (i / numSamples) * 2 * math.pi;
         final px = startX + (r * math.cos(angle)).round();
         final py = startY + (r * math.sin(angle)).round();
         if (px < 0 || px >= width || py < 0 || py >= height) continue;
@@ -339,20 +375,47 @@ class ImageProcessingService {
 
       if (ringCount > 0) {
         final avgRingLum = ringSum / ringCount;
-        // If the ring is significantly brighter than the center, we've found the edge
-        if (avgRingLum > centerLum + 40) {
+        radialLuminances.add(avgRingLum);
+
+        // Edge detection: significant luminance gradient
+        // Adaptive threshold based on the difference from the center
+        final luminanceDiff = avgRingLum - centerLum;
+
+        // The edge is found when there is a significant increase in luminance
+        if (luminanceDiff > 30 && maxRadius == 0) {
           maxRadius = r.toDouble();
-          break;
+          break; // Stop as soon as the edge is found
+        }
+
+        if (maxRadius == 0) {
+          sumLum += ringSum;
+          pixelCount += ringCount;
         }
-        sumLum += ringSum;
-        pixelCount += ringCount;
       }
     }
 
-    if (maxRadius < 3) return null; // Too small to be a valid blob
+    // If no edge was found, look for the maximum gradient
+    if (maxRadius < 2 && radialLuminances.length > 3) {
+      double maxGradient = 0;
+      int maxGradientIndex = 0;
+      for (int i = 1; i < radialLuminances.length; i++) {
+        final gradient = radialLuminances[i] - radialLuminances[i - 1];
+        if (gradient > maxGradient) {
+          maxGradient = gradient;
+          maxGradientIndex = i;
+        }
+      }
+      if (maxGradient > 10) {
+        maxRadius = (maxGradientIndex + 1).toDouble();
+      }
+    }
 
-    // Calculate threshold as the midpoint between center and edge luminance
-    final edgeRadius = (maxRadius * 1.2).round();
+    // Minimum radius of 3 pixels, maximum of 50 for a bullet impact
+    if (maxRadius < 3) maxRadius = 3;
+    if (maxRadius > 50) maxRadius = 50;
+
+    // Calculate threshold as weighted average between center and edge luminance
+    final edgeRadius = math.min((maxRadius * 1.2).round(), maxSearchRadius - 1);
     double edgeLum = 0;
     int edgeCount = 0;
     for (int i = 0; i < 16; i++) {
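The fallback in the hunk above picks the ring where the luminance jump between consecutive radii is largest, and only accepts it if the jump clears a noise floor. A minimal sketch of that idea, in Python for illustration (the function name and values are hypothetical, not from the Dart code):

```python
# Sketch of the gradient-fallback edge search: given average luminances
# sampled at radii 1, 2, 3, ..., pick the radius where the jump between
# consecutive rings is largest (and above a noise floor).
def edge_radius_from_gradient(radial_luminances, min_gradient=10):
    best_gradient = 0.0
    best_index = 0
    for i in range(1, len(radial_luminances)):
        gradient = radial_luminances[i] - radial_luminances[i - 1]
        if gradient > best_gradient:
            best_gradient = gradient
            best_index = i
    # Mirror the Dart code: only accept the edge if the jump is significant.
    return best_index + 1 if best_gradient > min_gradient else 0

# A dark core (lum ~30) that brightens sharply at the 4th ring:
print(edge_radius_from_gradient([30, 32, 35, 90, 120]))  # -> 4
```

A slowly brightening profile with no sharp jump returns 0, which is what lets the Dart code fall through to its radius clamping.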
@@ -366,62 +429,94 @@ class ImageProcessingService {
       }
     }
     if (edgeCount > 0) {
       edgeLum /= edgeCount;
+    } else {
+      edgeLum = centerLum + 50;
     }
 
-    final threshold = ((centerLum + edgeLum) / 2).round();
+    // Compute the optimal threshold
+    final threshold = ((centerLum + edgeLum) / 2).round().clamp(20, 200);
 
-    // Now do a flood fill with this threshold to get the actual blob
-    final mask = List.generate(height, (_) => List.filled(width, false));
-    for (int y = 0; y < height; y++) {
-      for (int x = 0; x < width; x++) {
-        final pixel = image.getPixel(x, y);
+    // Use a limited local search area around the point
+    final analysis = _tryFindBlobWithThresholdLocal(
+      image, startX, startY, width, height, threshold, sumLum / pixelCount,
+      maxRadius.round() + 10, // Search area slightly larger than the detected radius
+    );
+
+    return analysis;
+  }
+
+  /// Finds a blob with a threshold within a limited local area
+  _BlobAnalysis? _tryFindBlobWithThresholdLocal(
+    img.Image image,
+    int startX,
+    int startY,
+    int width,
+    int height,
+    int threshold,
+    double avgLuminance,
+    int maxSearchRadius,
+  ) {
+    // Limit the search area
+    final minX = math.max(0, startX - maxSearchRadius);
+    final maxX = math.min(width - 1, startX + maxSearchRadius);
+    final minY = math.max(0, startY - maxSearchRadius);
+    final maxY = math.min(height - 1, startY + maxSearchRadius);
+
+    final localWidth = maxX - minX + 1;
+    final localHeight = maxY - minY + 1;
+
+    // Create binary mask ONLY for the local region
+    final mask = List.generate(localHeight, (_) => List.filled(localWidth, false));
+    for (int y = 0; y < localHeight; y++) {
+      for (int x = 0; x < localWidth; x++) {
+        final globalX = minX + x;
+        final globalY = minY + y;
+        final pixel = image.getPixel(globalX, globalY);
         final lum = img.getLuminance(pixel);
         mask[y][x] = lum < threshold;
       }
     }
 
-    final visited = List.generate(height, (_) => List.filled(width, false));
+    final visited = List.generate(localHeight, (_) => List.filled(localWidth, false));
 
-    // Find the blob containing the start point
-    if (!mask[startY][startX]) {
+    // Find the blob containing the start point (in local coordinates)
+    final localStartX = startX - minX;
+    final localStartY = startY - minY;
+
+    int searchX = localStartX;
+    int searchY = localStartY;
+
+    if (!mask[localStartY][localStartX]) {
       // Start point might not be in mask, find nearest point that is
-      for (int r = 1; r <= 10; r++) {
-        bool found = false;
+      bool found = false;
+      for (int r = 1; r <= 15 && !found; r++) {
         for (int dy = -r; dy <= r && !found; dy++) {
           for (int dx = -r; dx <= r && !found; dx++) {
-            final px = startX + dx;
-            final py = startY + dy;
-            if (px >= 0 && px < width && py >= 0 && py < height && mask[py][px]) {
-              final blob = _floodFill(mask, visited, px, py, width, height);
-
-              // Calculate fill ratio: actual pixels / bounding circle area
-              final boundingRadius = math.max(blob.radius, 1);
-              final boundingCircleArea = math.pi * boundingRadius * boundingRadius;
-              final fillRatio = (blob.size / boundingCircleArea).clamp(0.0, 1.0);
-
-              return _BlobAnalysis(
-                avgLuminance: sumLum / pixelCount,
-                size: blob.size,
-                circularity: blob.circularity,
-                fillRatio: fillRatio,
-                threshold: threshold.toDouble(),
-              );
+            final px = localStartX + dx;
+            final py = localStartY + dy;
+            if (px >= 0 && px < localWidth && py >= 0 && py < localHeight && mask[py][px]) {
+              searchX = px;
+              searchY = py;
+              found = true;
             }
           }
         }
       }
-      return null;
+      if (!found) return null;
     }
 
-    final blob = _floodFill(mask, visited, startX, startY, width, height);
+    final blob = _floodFillLocal(mask, visited, searchX, searchY, localWidth, localHeight);
 
-    // Calculate fill ratio
+    // Check that the blob is valid - reasonable size for an impact
+    if (blob.size < 10 || blob.size > 5000) return null; // Between 10 and 5000 pixels
+
+    // Calculate fill ratio: actual pixels / bounding circle area
     final boundingRadius = math.max(blob.radius, 1);
     final boundingCircleArea = math.pi * boundingRadius * boundingRadius;
     final fillRatio = (blob.size / boundingCircleArea).clamp(0.0, 1.0);
 
     return _BlobAnalysis(
-      avgLuminance: sumLum / pixelCount,
+      avgLuminance: avgLuminance,
       size: blob.size,
       circularity: blob.circularity,
       fillRatio: fillRatio,
@@ -429,12 +524,110 @@ class ImageProcessingService {
     );
   }
 
+  /// Flood fill for a local area
+  _Blob _floodFillLocal(
+    List<List<bool>> mask,
+    List<List<bool>> visited,
+    int startX,
+    int startY,
+    int width,
+    int height,
+  ) {
+    final stack = <_Point>[_Point(startX, startY)];
+    final points = <_Point>[];
+
+    int minX = startX, maxX = startX;
+    int minY = startY, maxY = startY;
+    int perimeterCount = 0;
+
+    while (stack.isNotEmpty) {
+      final point = stack.removeLast();
+      final x = point.x;
+      final y = point.y;
+
+      if (x < 0 || x >= width || y < 0 || y >= height) continue;
+      if (visited[y][x] || !mask[y][x]) continue;
+
+      visited[y][x] = true;
+      points.add(point);
+
+      minX = math.min(minX, x);
+      maxX = math.max(maxX, x);
+      minY = math.min(minY, y);
+      maxY = math.max(maxY, y);
+
+      // Check if this is a perimeter pixel
+      bool isPerimeter = false;
+      for (final delta in [[-1, 0], [1, 0], [0, -1], [0, 1]]) {
+        final nx = x + delta[0];
+        final ny = y + delta[1];
+        if (nx < 0 || nx >= width || ny < 0 || ny >= height || !mask[ny][nx]) {
+          isPerimeter = true;
+          break;
+        }
+      }
+      if (isPerimeter) perimeterCount++;
+
+      // Add neighbors (4-connectivity)
+      stack.add(_Point(x + 1, y));
+      stack.add(_Point(x - 1, y));
+      stack.add(_Point(x, y + 1));
+      stack.add(_Point(x, y - 1));
+    }
+
+    // Calculate centroid
+    double sumX = 0, sumY = 0;
+    for (final p in points) {
+      sumX += p.x;
+      sumY += p.y;
+    }
+
+    final centerX = points.isNotEmpty ? sumX / points.length : startX.toDouble();
+    final centerY = points.isNotEmpty ? sumY / points.length : startY.toDouble();
+
+    // Calculate bounding box dimensions
+    final blobWidth = (maxX - minX + 1).toDouble();
+    final blobHeight = (maxY - minY + 1).toDouble();
+
+    // Calculate approximate radius based on bounding box
+    final radius = math.max(blobWidth, blobHeight) / 2.0;
+
+    // Calculate circularity
+    final area = points.length.toDouble();
+    final perimeter = perimeterCount.toDouble();
+    final circularity = perimeter > 0
+        ? (4 * math.pi * area) / (perimeter * perimeter)
+        : 0.0;
+
+    // Calculate aspect ratio
+    final aspectRatio = blobWidth > blobHeight
+        ? blobWidth / blobHeight
+        : blobHeight / blobWidth;
+
+    // Calculate fill ratio
+    final boundingCircleArea = math.pi * radius * radius;
+    final fillRatio = boundingCircleArea > 0 ? (area / boundingCircleArea).clamp(0.0, 1.0) : 0.0;
+
+    return _Blob(
+      x: centerX,
+      y: centerY,
+      radius: radius,
+      size: points.length,
+      circularity: circularity.clamp(0.0, 1.0),
+      aspectRatio: aspectRatio,
+      fillRatio: fillRatio,
+    );
+  }
+
   /// Detect impacts based on reference characteristics with tolerance
+  ///
+  /// Uses an adaptive multi-threshold approach for better detection
   List<DetectedImpact> detectImpactsFromReferences(
     String imagePath,
     ImpactCharacteristics characteristics, {
     double tolerance = 2.0, // Number of standard deviations
-    double minCircularity = 0.4,
+    double minCircularity = 0.3,
   }) {
     try {
       final file = File(imagePath);
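The `_floodFillLocal` added above scores each blob with the classic isoperimetric circularity 4πA/P², which is 1 for a perfect disk and falls toward 0 for elongated or ragged shapes. The same metric, sketched in Python for illustration (names are hypothetical):

```python
import math

# Isoperimetric circularity: 4*pi*area / perimeter^2.
# A perfect disk scores 1.0; stringy or hollow shapes score much lower.
def circularity(area, perimeter):
    if perimeter <= 0:
        return 0.0
    return min(1.0, (4 * math.pi * area) / (perimeter ** 2))

# An ideal disk of radius r has area pi*r^2 and perimeter 2*pi*r,
# so the ratio is exactly 1 regardless of r:
r = 7.0
print(round(circularity(math.pi * r * r, 2 * math.pi * r), 3))  # -> 1.0
```

On a pixel grid the perimeter is approximated by counting boundary pixels, so real blobs score somewhat below the geometric ideal, which is why the Dart code clamps and compares against tolerant minimums rather than expecting 1.0.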
@@ -445,7 +638,7 @@ class ImageProcessingService {
       // Resize for faster processing
       img.Image image;
       double scale = 1.0;
-      final maxDimension = 1000;
+      final maxDimension = 1200; // Increased for more precision
       if (originalImage.width > maxDimension || originalImage.height > maxDimension) {
         scale = maxDimension / math.max(originalImage.width, originalImage.height);
         image = img.copyResize(
@@ -460,36 +653,83 @@ class ImageProcessingService {
       final grayscale = img.grayscale(image);
       final blurred = img.gaussianBlur(grayscale, radius: 2);
 
-      // Use the threshold learned from references
-      final threshold = characteristics.avgDarkThreshold.round();
+      // IMPROVEMENT: use several thresholds around the learned threshold
+      final baseThreshold = characteristics.avgDarkThreshold.round();
+
+      // Generate a more targeted range of thresholds
+      final thresholds = <int>[];
+      final thresholdRange = (15 * tolerance).round(); // Moderate range
+      for (int offset = -thresholdRange; offset <= thresholdRange; offset += 8) {
+        final t = (baseThreshold + offset).clamp(30, 150);
+        if (!thresholds.contains(t)) thresholds.add(t);
+      }
 
       // Calculate size range based on learned characteristics
-      final minSize = (characteristics.avgSize / (tolerance * 2)).clamp(5, 10000).round();
-      final maxSize = (characteristics.avgSize * tolerance * 2).clamp(10, 10000).round();
+      // Use the variance, but with reasonable limits
+      final sizeVariance = math.max(characteristics.sizeStdDev * tolerance, characteristics.avgSize * 0.4);
+      final minSize = math.max(20, (characteristics.avgSize - sizeVariance).round()); // Minimum 20 pixels
+      final maxSize = math.min(3000, (characteristics.avgSize + sizeVariance * 2).round()); // Maximum 3000 pixels
 
-      // Calculate minimum fill ratio based on learned characteristics
-      // Allow some variance but still filter out hollow shapes
-      final minFillRatio = (characteristics.avgFillRatio - 0.2).clamp(0.3, 0.9);
+      // Calculate minimum circularity - balanced
+      final circularityTolerance = 0.2 * tolerance;
+      final effectiveMinCircularity = math.max(
+        characteristics.avgCircularity - circularityTolerance,
+        minCircularity,
+      ).clamp(0.35, 0.85);
 
-      // Detect blobs using the learned threshold
-      final impacts = _detectDarkSpots(
-        blurred,
-        threshold,
-        minSize,
-        maxSize,
-        minCircularity: math.max(characteristics.avgCircularity - 0.2, minCircularity),
-        minFillRatio: minFillRatio,
-      );
+      // Calculate minimum fill ratio - solid impacts
+      final minFillRatio = (characteristics.avgFillRatio - 0.2).clamp(0.35, 0.85);
+
+      print('Detection params: thresholds=$thresholds, size=$minSize-$maxSize, '
+          'circ>=$effectiveMinCircularity, fill>=$minFillRatio');
+
+      // Detect with several thresholds and combine the results
+      final allBlobs = <_Blob>[];
+      for (final threshold in thresholds) {
+        final blobs = _detectDarkSpots(
+          blurred,
+          threshold,
+          minSize,
+          maxSize,
+          minCircularity: effectiveMinCircularity,
+          maxAspectRatio: 2.5, // More permissive
+          minFillRatio: minFillRatio,
+        );
+        allBlobs.addAll(blobs);
+      }
+
+      // Merge overlapping blobs (same impact detected at different thresholds)
+      final mergedBlobs = _mergeOverlappingBlobs(allBlobs);
+
+      // POST-DETECTION FILTER: keep only blobs similar to the references
+      // The filter is more or less strict depending on the tolerance
+      final sizeToleranceFactor = 0.3 + (tolerance - 1) * 0.3; // 0.3 to 1.5 depending on tolerance
+      final minSizeRatio = math.max(0.15, 1 / (1 + sizeToleranceFactor * 3));
+      final maxSizeRatio = 1 + sizeToleranceFactor * 4;
+
+      final filteredBlobs = mergedBlobs.where((blob) {
+        // Check the size against the learned characteristics
+        final sizeRatio = blob.size / characteristics.avgSize;
+        if (sizeRatio < minSizeRatio || sizeRatio > maxSizeRatio) return false;
+
+        // Check circularity (slightly relaxed)
+        if (blob.circularity < effectiveMinCircularity * 0.85) return false;
+
+        // Check the fill ratio
+        if (blob.fillRatio < minFillRatio * 0.9) return false;
+
+        return true;
+      }).toList();
+
+      print('Found ${filteredBlobs.length} impacts after filtering (from ${mergedBlobs.length} merged)');
 
       // Convert to relative coordinates
-      final width = originalImage.width.toDouble();
-      final height = originalImage.height.toDouble();
-
-      return impacts.map((impact) {
+      return filteredBlobs.map((blob) {
         return DetectedImpact(
-          x: impact.x / image.width,
-          y: impact.y / image.height,
-          radius: impact.radius / scale,
+          x: blob.x / image.width,
+          y: blob.y / image.height,
+          radius: blob.radius / scale,
         );
       }).toList();
     } catch (e) {
@@ -498,6 +738,44 @@ class ImageProcessingService {
     }
   }
 
+  /// Merges overlapping blobs, keeping the best representative
+  List<_Blob> _mergeOverlappingBlobs(List<_Blob> blobs) {
+    if (blobs.isEmpty) return [];
+
+    // Sort by quality score (circularity * fillRatio)
+    final sortedBlobs = List<_Blob>.from(blobs);
+    sortedBlobs.sort((a, b) {
+      final scoreA = a.circularity * a.fillRatio * a.size;
+      final scoreB = b.circularity * b.fillRatio * b.size;
+      return scoreB.compareTo(scoreA);
+    });
+
+    final merged = <_Blob>[];
+
+    for (final blob in sortedBlobs) {
+      bool shouldAdd = true;
+
+      for (final existing in merged) {
+        final dx = blob.x - existing.x;
+        final dy = blob.y - existing.y;
+        final distance = math.sqrt(dx * dx + dy * dy);
+        final minDist = math.min(blob.radius, existing.radius);
+
+        // If the centers are close, it is the same impact
+        if (distance < minDist * 1.5) {
+          shouldAdd = false;
+          break;
+        }
+      }
+
+      if (shouldAdd) {
+        merged.add(blob);
+      }
+    }
+
+    return merged;
+  }
+
   /// Detect dark spots with adaptive luminance range
   List<_Blob> _detectDarkSpotsAdaptive(
     img.Image image,
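`_mergeOverlappingBlobs` above is a greedy non-maximum suppression: candidates are sorted by a quality score, and a candidate is kept only if its center is far enough from every already-kept detection. The same dedup logic, sketched in Python for illustration (all names and sample values are hypothetical):

```python
import math

# Greedy non-maximum suppression over circle detections: sort by score,
# then drop any circle whose center lies within 1.5x the smaller radius
# of an already-kept circle (same impact seen at another threshold).
def merge_overlapping(circles):
    # circles: list of dicts with keys x, y, radius, score
    kept = []
    for c in sorted(circles, key=lambda c: c["score"], reverse=True):
        duplicate = False
        for k in kept:
            dist = math.hypot(c["x"] - k["x"], c["y"] - k["y"])
            if dist < 1.5 * min(c["radius"], k["radius"]):
                duplicate = True
                break
        if not duplicate:
            kept.append(c)
    return kept

detections = [
    {"x": 10, "y": 10, "radius": 5, "score": 0.9},
    {"x": 12, "y": 11, "radius": 5, "score": 0.5},  # same impact, lower score
    {"x": 40, "y": 40, "radius": 5, "score": 0.7},
]
print(len(merge_overlapping(detections)))  # -> 2
```

Sorting by score first ensures the best-quality detection of each impact is the one that survives, which is why the Dart code sorts before the distance check rather than after.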
119
lib/services/opencv_impact_detection_service.dart
Normal file
@@ -0,0 +1,119 @@
+/// Impact detection service using OpenCV.
+///
+/// NOTE: OpenCV is currently disabled on Windows due to compilation
+/// issues. This file contains stubs that let the code compile without
+/// OpenCV. Re-enable opencv_dart in pubspec.yaml and uncomment the code
+/// below once support is fixed.
+library;
+
+// import 'dart:math' as math;
+// import 'package:opencv_dart/opencv_dart.dart' as cv;
+
+/// OpenCV impact detection parameters
+class OpenCVDetectionSettings {
+  /// Low Canny threshold for edge detection
+  final double cannyThreshold1;
+
+  /// High Canny threshold for edge detection
+  final double cannyThreshold2;
+
+  /// Minimum distance between the centers of detected circles
+  final double minDist;
+
+  /// HoughCircles param1 (internal Canny threshold)
+  final double param1;
+
+  /// HoughCircles param2 (accumulator threshold)
+  final double param2;
+
+  /// Minimum circle radius in pixels
+  final int minRadius;
+
+  /// Maximum circle radius in pixels
+  final int maxRadius;
+
+  /// Gaussian blur size (must be odd)
+  final int blurSize;
+
+  /// Use contour detection in addition to Hough
+  final bool useContourDetection;
+
+  /// Minimum circularity for contour detection (0-1)
+  final double minCircularity;
+
+  /// Minimum contour area
+  final double minContourArea;
+
+  /// Maximum contour area
+  final double maxContourArea;
+
+  const OpenCVDetectionSettings({
+    this.cannyThreshold1 = 50,
+    this.cannyThreshold2 = 150,
+    this.minDist = 20,
+    this.param1 = 100,
+    this.param2 = 30,
+    this.minRadius = 5,
+    this.maxRadius = 50,
+    this.blurSize = 5,
+    this.useContourDetection = true,
+    this.minCircularity = 0.6,
+    this.minContourArea = 50,
+    this.maxContourArea = 5000,
+  });
+}
+
+/// Impact detection result
+class OpenCVDetectedImpact {
+  /// Normalized X position (0-1)
+  final double x;
+
+  /// Normalized Y position (0-1)
+  final double y;
+
+  /// Radius in pixels
+  final double radius;
+
+  /// Confidence score (0-1)
+  final double confidence;
+
+  /// Detection method used
+  final String method;
+
+  const OpenCVDetectedImpact({
+    required this.x,
+    required this.y,
+    required this.radius,
+    this.confidence = 1.0,
+    this.method = 'unknown',
+  });
+}
+
+/// Impact detection service using OpenCV
+///
+/// NOTE: currently disabled - returns empty lists.
+/// OpenCV is not available on Windows at the moment.
+class OpenCVImpactDetectionService {
+  /// Detects impacts in an image using OpenCV
+  ///
+  /// STUB: returns an empty list because OpenCV is disabled.
+  List<OpenCVDetectedImpact> detectImpacts(
+    String imagePath, {
+    OpenCVDetectionSettings settings = const OpenCVDetectionSettings(),
+  }) {
+    print('OpenCV est désactivé - utilisation de la détection classique recommandée');
+    return [];
+  }
+
+  /// Detects impacts using a reference image
+  ///
+  /// STUB: returns an empty list because OpenCV is disabled.
+  List<OpenCVDetectedImpact> detectFromReferences(
+    String imagePath,
+    List<({double x, double y})> referencePoints, {
+    double tolerance = 2.0,
+  }) {
+    print('OpenCV est désactivé - utilisation de la détection par références classique recommandée');
+    return [];
+  }
+}
@@ -1,8 +1,10 @@
 import 'dart:math' as math;
 import '../data/models/target_type.dart';
 import 'image_processing_service.dart';
+import 'opencv_impact_detection_service.dart';
 
 export 'image_processing_service.dart' show ImpactDetectionSettings, ReferenceImpact, ImpactCharacteristics;
+export 'opencv_impact_detection_service.dart' show OpenCVDetectionSettings, OpenCVDetectedImpact;
 
 class TargetDetectionResult {
   final double centerX; // Relative (0-1)
@@ -49,10 +51,13 @@ class DetectedImpactResult {
 
 class TargetDetectionService {
   final ImageProcessingService _imageProcessingService;
+  final OpenCVImpactDetectionService _opencvService;
 
   TargetDetectionService({
     ImageProcessingService? imageProcessingService,
-  }) : _imageProcessingService = imageProcessingService ?? ImageProcessingService();
+    OpenCVImpactDetectionService? opencvService,
+  }) : _imageProcessingService = imageProcessingService ?? ImageProcessingService(),
+       _opencvService = opencvService ?? OpenCVImpactDetectionService();
 
   /// Detect target and impacts from an image file
   TargetDetectionResult detectTarget(
@@ -254,4 +259,88 @@ class TargetDetectionService {

    return [];
  }

  /// Detects impacts using OpenCV (Hough Circles + Contours).
  ///
  /// This method uses OpenCV algorithms for more robust detection:
  /// - Hough transform to detect circles
  /// - Contour analysis with circularity filtering
  List<DetectedImpactResult> detectImpactsWithOpenCV(
    String imagePath,
    TargetType targetType,
    double centerX,
    double centerY,
    double radius,
    int ringCount, {
    OpenCVDetectionSettings? settings,
  }) {
    try {
      final impacts = _opencvService.detectImpacts(
        imagePath,
        settings: settings ?? const OpenCVDetectionSettings(),
      );

      return impacts.map((impact) {
        final score = targetType == TargetType.concentric
            ? _calculateConcentricScoreWithRings(
                impact.x, impact.y, centerX, centerY, radius, ringCount)
            : _calculateSilhouetteScore(impact.x, impact.y, centerX, centerY);

        return DetectedImpactResult(
          x: impact.x,
          y: impact.y,
          radius: impact.radius,
          suggestedScore: score,
        );
      }).toList();
    } catch (e) {
      print('OpenCV detection error: $e');
      return [];
    }
  }

  /// Detects impacts with OpenCV using reference impacts.
  ///
  /// Analyzes the reference impacts to learn their characteristics,
  /// then detects similar impacts in the image.
  List<DetectedImpactResult> detectImpactsWithOpenCVFromReferences(
    String imagePath,
    TargetType targetType,
    double centerX,
    double centerY,
    double radius,
    int ringCount,
    List<ReferenceImpact> references, {
    double tolerance = 2.0,
  }) {
    try {
      // Convert the references to the OpenCV format
      final refPoints = references
          .map((r) => (x: r.x, y: r.y))
          .toList();

      final impacts = _opencvService.detectFromReferences(
        imagePath,
        refPoints,
        tolerance: tolerance,
      );

      return impacts.map((impact) {
        final score = targetType == TargetType.concentric
            ? _calculateConcentricScoreWithRings(
                impact.x, impact.y, centerX, centerY, radius, ringCount)
            : _calculateSilhouetteScore(impact.x, impact.y, centerX, centerY);

        return DetectedImpactResult(
          x: impact.x,
          y: impact.y,
          radius: impact.radius,
          suggestedScore: score,
        );
      }).toList();
    } catch (e) {
      print('OpenCV detection error from references: $e');
      return [];
    }
  }
}
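The "contour analysis with circularity filtering" mentioned in the doc comments typically relies on the standard circularity metric 4πA/P², which scores a perfect circle at 1.0 and penalizes elongated or ragged blobs. A minimal illustrative sketch of that metric (in Python rather than the app's Dart, and independent of the actual `OpenCVImpactDetectionService` implementation):

```python
import math

def circularity(area: float, perimeter: float) -> float:
    """Circularity = 4*pi*A / P^2: 1.0 for a perfect circle, lower otherwise."""
    if perimeter <= 0:
        return 0.0
    return 4.0 * math.pi * area / (perimeter * perimeter)

# A disc of radius r (area pi*r^2, perimeter 2*pi*r) scores exactly 1.0,
# while a square of side s (area s^2, perimeter 4*s) scores pi/4 ~ 0.785.
r = 10.0
print(round(circularity(math.pi * r * r, 2.0 * math.pi * r), 3))
s = 10.0
print(round(circularity(s * s, 4.0 * s), 3))
```

A detector would keep only contours whose circularity exceeds some threshold (e.g. 0.7) so that torn paper edges and shadows are rejected while round bullet holes pass.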
@@ -35,7 +35,7 @@ dependencies:

  # Use with the CupertinoIcons class for iOS style icons.
  cupertino_icons: ^1.0.8

  # Image processing with OpenCV (temporarily disabled - Windows build issues)
  # opencv_dart: ^2.1.0

  # Image capture from camera/gallery