Compare commits
16 Commits
f1a8eefdc3...v.0.0.3
| SHA1 |
| --- |
| e32833e366 |
| d4cb179fde |
| 2e81f4b69e |
| de677aad7e |
| 723900b860 |
| f78184d2cd |
| 7e55c52ae7 |
| 049f77ac6d |
| 91fcd931e8 |
| 9ccb149dda |
| 972474750f |
| 2f69ff4ecf |
| db7160b094 |
| f6d493eb4e |
| 2545659f12 |
| 334332bc78 |
7
.claude/settings.json
Normal file
@@ -0,0 +1,7 @@
{
  "permissions": {
    "allow": [
      "Bash(flutter analyze:*)"
    ]
  }
}
@@ -3,7 +3,14 @@
     "allow": [
       "Bash(flutter clean:*)",
       "Bash(flutter pub get:*)",
-      "Bash(flutter run:*)"
+      "Bash(flutter run:*)",
+      "Bash(cmake:*)",
+      "Bash(where:*)",
+      "Bash(winget search:*)",
+      "Bash(winget install:*)",
+      "Bash(\"/c/Program Files \\(x86\\)/Microsoft Visual Studio/Installer/vs_installer.exe\" modify --installPath \"C:\\\\Program Files \\(x86\\)\\\\Microsoft Visual Studio\\\\2022\\\\BuildTools\" --add Microsoft.VisualStudio.Workload.VCTools --add Microsoft.VisualStudio.Component.VC.Tools.x86.x64 --add Microsoft.VisualStudio.Component.Windows11SDK.22621 --passive --wait)",
+      "Bash(cmd //c \"\"\"C:\\\\Program Files\\\\Microsoft Visual Studio\\\\18\\\\Community\\\\Common7\\\\Tools\\\\VsDevCmd.bat\"\" && flutter run -d windows\")",
+      "Bash(flutter doctor:*)"
     ]
   }
 }
25
CHANGELOG.md
Normal file
@@ -0,0 +1,25 @@
# Changelog

## [v0.0.1] - 2026-01-29

### Added
- **Analysis Interface**:
  - Implemented a "morphing" save button: the floating button transforms into a wide bottom-of-page button while scrolling.
  - Added scrolling and spacing handling for better ergonomics.
  - Visualization of impacts and grouping statistics.
- **Desktop Support (Windows)**:
  - Configured the SQLite database to run on Windows via `sqflite_common_ffi`.
  - Conditional initialization depending on the platform.

### Fixed
- **Windows crash**: Resolved the crash caused by the missing initialization of the FFI database factory.
- **Dependencies**: Pinned `sqflite_common_ffi` to version `2.3.3` to work around a cache/corruption issue with version `2.4.0+2`.
- **UI/UX**:
  - Fixed text overflows ("zebra stripes") in the save button during the animation using `FittedBox`.
  - Optimized the display of the "Groupement" title in the statistics to avoid overflow on small screens.
  - Cleaned up redundant calls (`super.initState`) and fixed widget structure (improperly closed `Stack`).

### Commit History
- `db7160b` - +désactivation (2026-01-29)
- `f1a8eef` - ajout correctif (2026-01-28)
- `031d4a4` - premier app version beta (2026-01-18)
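The platform-conditional SQLite setup and the missing-factory crash fix described in this changelog can be sketched as follows. This is a minimal illustration of the usual `sqflite_common_ffi` bootstrap, assuming standard package APIs, not the project's actual code:

```dart
import 'dart:io' show Platform;
import 'package:sqflite/sqflite.dart';
import 'package:sqflite_common_ffi/sqflite_ffi.dart';

/// On desktop platforms the FFI database factory must be installed
/// before any database call; otherwise opening the database crashes
/// (the "Windows crash" noted above).
void initDatabaseFactory() {
  if (Platform.isWindows || Platform.isLinux) {
    // Initialize the FFI loader, then route sqflite through it.
    sqfliteFfiInit();
    databaseFactory = databaseFactoryFfi;
  }
  // On Android/iOS the default sqflite factory is used unchanged.
}
```

A call to `initDatabaseFactory()` early in `main()` would be the typical place to run this, before the first `openDatabase`.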
40
README.md
@@ -1,17 +1,35 @@
-# bully
+# Bully - Target Analyzer

-A new Flutter project.
+Cross-platform Flutter application for analyzing and tracking your shooting sessions.

-## Getting Started
+## Main Features

-This project is a starting point for a Flutter application.
+* **Capture and Analysis**: Take a photo of your target and analyze your impacts.
+* **Automatic Detection**: Uses algorithms to automatically detect bullet impacts on the target.
+* **Calibration**: Precise calibration tools to define the target's size and center, ensuring accurate measurements.
+* **Detailed Statistics**:
+  * Total score calculation.
+  * Grouping analysis (H+L, mean diameter).
+  * Graphical visualization of dispersion.
+* **History**: Save your sessions with notes and track your progress over time.
+* **Intuitive Interface**: Modern, fluid design with a dynamic save button that adapts to your navigation.

-A few resources to get you started if this is your first Flutter project:
+## Technical Details

-- [Learn Flutter](https://docs.flutter.dev/get-started/learn-flutter)
-- [Write your first Flutter app](https://docs.flutter.dev/get-started/codelab)
-- [Flutter learning resources](https://docs.flutter.dev/reference/learning-resources)
+* **Framework**: Flutter (compatible with Android, iOS, Windows, Linux, macOS).
+* **Database**: SQLite (via `sqflite` and `sqflite_common_ffi` for desktop support).
+* **Charts**: `fl_chart` for data visualization.
+* **Architecture**: Provider for state management.

-For help getting started with Flutter development, view the
-[online documentation](https://docs.flutter.dev/), which offers tutorials,
-samples, guidance on mobile development, and a full API reference.
+## Installation
+
+1. Make sure Flutter is installed.
+2. Clone the repository.
+3. Install the dependencies:
+   ```bash
+   flutter pub get
+   ```
+4. Launch the application:
+   ```bash
+   flutter run
+   ```
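The grouping analysis advertised in the README ("H+L" extreme spread) can be sketched like this. The `Shot` class here is a hypothetical stand-in for the app's model; the real computation lives in `lib/services/grouping_analyzer_service.dart`, which is not shown in this diff:

```dart
// Hypothetical stand-in for the app's shot model (normalized coordinates).
class Shot {
  final double x, y;
  const Shot(this.x, this.y);
}

/// "H+L": horizontal extreme spread plus vertical extreme spread
/// of a group of shots, a common grouping metric in target shooting.
double hPlusL(List<Shot> shots) {
  if (shots.length < 2) return 0;
  var minX = shots.first.x, maxX = shots.first.x;
  var minY = shots.first.y, maxY = shots.first.y;
  for (final s in shots) {
    if (s.x < minX) minX = s.x;
    if (s.x > maxX) maxX = s.x;
    if (s.y < minY) minY = s.y;
    if (s.y > maxY) maxY = s.y;
  }
  return (maxX - minX) + (maxY - minY);
}
```

The "mean diameter" metric would be computed separately (e.g. from pairwise distances); its exact definition in the app is not visible in this diff.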
31
analyze_log.txt
Normal file
@@ -0,0 +1,31 @@
Analyzing bully...

info - Statements in an if should be enclosed in a block - lib\features\analysis\analysis_screen.dart:122:17 - curly_braces_in_flow_control_structures
info - 'withOpacity' is deprecated and shouldn't be used. Use .withValues() to avoid precision loss - lib\features\analysis\analysis_screen.dart:650:51 - deprecated_member_use
warning - The declaration '_showAddShotHint' isn't referenced - lib\features\analysis\analysis_screen.dart:1083:8 - unused_element
warning - The declaration '_showAutoDetectDialog' isn't referenced - lib\features\analysis\analysis_screen.dart:1120:8 - unused_element
warning - Unused import: 'widgets/target_type_selector.dart' - lib\features\capture\capture_screen.dart:16:8 - unused_import
info - The private field _selectedType could be 'final' - lib\features\capture\capture_screen.dart:28:14 - prefer_final_fields
info - 'scale' is deprecated and shouldn't be used. Use scaleByVector3, scaleByVector4, or scaleByDouble instead - lib\features\crop\crop_screen.dart:141:25 - deprecated_member_use
info - The import of 'package:flutter/foundation.dart' is unnecessary because all of the used elements are also provided by the import of 'package:flutter/material.dart' - lib\features\statistics\statistics_screen.dart:8:8 - unnecessary_import
warning - The declaration '_buildLegendItem' isn't referenced - lib\features\statistics\statistics_screen.dart:309:10 - unused_element
info - Unnecessary use of string interpolation - lib\features\statistics\statistics_screen.dart:408:15 - unnecessary_string_interpolations
info - Don't invoke 'print' in production code - lib\services\image_processing_service.dart:192:7 - avoid_print
info - Don't invoke 'print' in production code - lib\services\image_processing_service.dart:239:7 - avoid_print
info - Don't invoke 'print' in production code - lib\services\image_processing_service.dart:246:9 - avoid_print
info - Don't invoke 'print' in production code - lib\services\image_processing_service.dart:278:9 - avoid_print
info - Don't invoke 'print' in production code - lib\services\image_processing_service.dart:289:11 - avoid_print
info - Don't invoke 'print' in production code - lib\services\image_processing_service.dart:292:11 - avoid_print
info - Don't invoke 'print' in production code - lib\services\image_processing_service.dart:297:9 - avoid_print
info - Don't invoke 'print' in production code - lib\services\image_processing_service.dart:332:7 - avoid_print
info - Don't invoke 'print' in production code - lib\services\image_processing_service.dart:336:7 - avoid_print
info - Don't invoke 'print' in production code - lib\services\image_processing_service.dart:683:7 - avoid_print
info - Don't invoke 'print' in production code - lib\services\image_processing_service.dart:725:7 - avoid_print
info - Don't invoke 'print' in production code - lib\services\image_processing_service.dart:736:7 - avoid_print
warning - The declaration '_detectDarkSpotsAdaptive' isn't referenced - lib\services\image_processing_service.dart:780:15 - unused_element
info - Don't invoke 'print' in production code - lib\services\opencv_impact_detection_service.dart:104:5 - avoid_print
info - Don't invoke 'print' in production code - lib\services\opencv_impact_detection_service.dart:116:5 - avoid_print
info - Don't invoke 'print' in production code - lib\services\target_detection_service.dart:297:7 - avoid_print
info - Don't invoke 'print' in production code - lib\services\target_detection_service.dart:342:7 - avoid_print

27 issues found. (ran in 1.9s)
BIN
analyze_opencv.txt
Normal file
Binary file not shown.
@@ -1,4 +1,6 @@
 <manifest xmlns:android="http://schemas.android.com/apk/res/android">
+    <uses-permission android:name="android.permission.CAMERA" />
+
     <application
         android:label="bully"
         android:name="${applicationName}"
20
build_log.txt
Normal file
@@ -0,0 +1,20 @@
Running Gradle task 'assembleDebug'...

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':app:processDebugResources'.
> A failure occurred while executing com.android.build.gradle.internal.res.LinkApplicationAndroidResourcesTask$TaskAction
   > Android resource linking failed
     ERROR: C:\Users\streaper2\Documents\00 - projet\bully\build\cunning_document_scanner\intermediates\merged_manifest\debug\processDebugManifest\AndroidManifest.xml:9:5-65: AAPT: error: unexpected element <uses-permission> found in <manifest><application>.

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
> Get more help at https://help.gradle.org.

BUILD FAILED in 5s
Running Gradle task 'assembleDebug'... 5,4s
Gradle task assembleDebug failed with exit code 1
3
devtools_options.yaml
Normal file
@@ -0,0 +1,3 @@
description: This file stores settings for Dart & Flutter DevTools.
documentation: https://docs.flutter.dev/tools/devtools/extensions#configure-extension-enablement-states
extensions:
26
docs/README.md
Normal file
@@ -0,0 +1,26 @@
# Bully Project Documentation

Welcome to the developer documentation for the **Bully** application.

This project is a Flutter application for analyzing shooting targets (impact detection).

## Architecture

The source code in the `lib/` folder is organized into the following layers:

- **Features (`lib/features`)**: Contains the screens and UI logic (views/pages). This is where the user interface lives.
- **Services (`lib/services`)**: Business services and utilities (image processing, calculations, etc.). Independent of the UI.
- **Data (`lib/data`)**: Data management (models, local database, repositories).

## Documentation Sections

For more detail on each part, see the dedicated sections:

- 🏗️ **[Services (Business Logic)](services/README.md)**: Documentation for services such as image processing and score calculation.
- 📱 **[Views & Features (UI)](features/README.md)**: Documentation for the main screens (e.g. Analysis).
- 💾 **[Database & Models](data/README.md)**: Data structure and persistence.

## Getting Started

1. Make sure Flutter is installed.
2. Run `flutter run` to start the application.
17
docs/data/README.md
Normal file
@@ -0,0 +1,17 @@
# Data & Persistence

This layer handles saving and retrieving data.

## Database
The application uses a local database (probably SQLite/Drift or Hive — check `lib/data/database`).

## Models (`lib/data/models`)
The classes representing the persisted business objects.

Likely examples:
- `Session`: A shooting session.
- `Impact`: A bullet impact on the target.
- `Target`: A target configuration.

## Repositories (`lib/data/repositories`)
The Repository pattern is used to abstract the data source (local DB, remote API, etc.) from the rest of the application.
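The Repository abstraction described in that README might look like the following sketch. The method names and the `Session` stub are hypothetical; the actual interface lives in `lib/data/repositories` and is not shown in this diff:

```dart
// Hypothetical stand-in for the persisted session model.
class Session {
  final String id;
  const Session(this.id);
}

/// Abstract data-source boundary: callers depend only on this interface,
/// never on the concrete storage.
abstract class SessionRepository {
  Future<void> saveSession(Session session);
  Future<List<Session>> getAllSessions();
}

/// A local implementation would wrap the SQLite database; a remote
/// implementation could call an API — the rest of the app is unchanged.
class InMemorySessionRepository implements SessionRepository {
  final _sessions = <Session>[];

  @override
  Future<void> saveSession(Session session) async => _sessions.add(session);

  @override
  Future<List<Session>> getAllSessions() async => List.unmodifiable(_sessions);
}
```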
17
docs/features/README.md
Normal file
@@ -0,0 +1,17 @@
# Features & Views

This section documents the application's main screens and how they are organized.

## Main Screens

### Analysis (`lib/features/analysis`)
This is the heart of the application. It lets the user take a photo or pick an image to analyze the impacts.

- **AnalysisScreen** (`analysis_screen.dart`): The main screen that orchestrates capture and the display of results.
- **AnalysisProvider** (`analysis_provider.dart`): The state manager for this screen. It bridges the view and the services.

## Structure of a Feature
Each feature is typically composed of:
- `_screen.dart`: The page widget.
- `_provider.dart`: The state logic (ChangeNotifier, Bloc, etc.).
- `widgets/`: Widgets specific to this feature.
20
docs/services/README.md
Normal file
@@ -0,0 +1,20 @@
# Services

The services contain the application's business logic, isolated from the user interface.

## Main Services

| Service | Description | File |
| :--- | :--- | :--- |
| **ImageProcessingService** | Handles heavy image processing (filters, detection). | `lib/services/image_processing_service.dart` |
| **DistortionCorrection** | Corrects the perspective distortion of targets. | `lib/services/distortion_correction_service.dart` |
| **ScoreCalculator** | Computes the score from the detected impacts. | `lib/services/score_calculator_service.dart` |
| **StatisticsService** | Generates statistics on shooting sessions. | `lib/services/statistics_service.dart` |

## Usage Example (Fictional)

```dart
// Example call to the score calculation service
final calculator = ScoreCalculatorService();
final score = calculator.calculate(impacts);
```
@@ -2,6 +2,8 @@
 <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
 <plist version="1.0">
 <dict>
+	<key>NSCameraUsageDescription</key>
+	<string>This app needs camera access to scan documents</string>
 	<key>CADisableMinimumFrameDurationOnPhone</key>
 	<true/>
 	<key>CFBundleDevelopmentRegion</key>
||||
@@ -17,6 +17,7 @@ import '../../services/target_detection_service.dart';
 import '../../services/score_calculator_service.dart';
 import '../../services/grouping_analyzer_service.dart';
 import '../../services/distortion_correction_service.dart';
+import '../../services/opencv_target_service.dart';

 enum AnalysisState { initial, loading, success, error }
@@ -26,6 +27,7 @@ class AnalysisProvider extends ChangeNotifier {
   final GroupingAnalyzerService _groupingAnalyzerService;
   final SessionRepository _sessionRepository;
   final DistortionCorrectionService _distortionService;
+  final OpenCVTargetService _opencvTargetService;
   final Uuid _uuid = const Uuid();

   AnalysisProvider({
@@ -34,11 +36,13 @@ class AnalysisProvider extends ChangeNotifier {
     required GroupingAnalyzerService groupingAnalyzerService,
     required SessionRepository sessionRepository,
     DistortionCorrectionService? distortionService,
+    OpenCVTargetService? opencvTargetService,
   }) : _detectionService = detectionService,
        _scoreCalculatorService = scoreCalculatorService,
        _groupingAnalyzerService = groupingAnalyzerService,
        _sessionRepository = sessionRepository,
-       _distortionService = distortionService ?? DistortionCorrectionService();
+       _distortionService = distortionService ?? DistortionCorrectionService(),
+       _opencvTargetService = opencvTargetService ?? OpenCVTargetService();

   AnalysisState _state = AnalysisState.initial;
   String? _errorMessage;
@@ -49,6 +53,7 @@ class AnalysisProvider extends ChangeNotifier {
   double _targetCenterX = 0.5;
   double _targetCenterY = 0.5;
   double _targetRadius = 0.4;
+  double _targetInnerRadius = 0.04;
   int _ringCount = 10;
   List<double>? _ringRadii; // Individual ring radii multipliers
   double _imageAspectRatio = 1.0; // width / height
@@ -79,8 +84,10 @@ class AnalysisProvider extends ChangeNotifier {
   double get targetCenterX => _targetCenterX;
   double get targetCenterY => _targetCenterY;
   double get targetRadius => _targetRadius;
+  double get targetInnerRadius => _targetInnerRadius;
   int get ringCount => _ringCount;
-  List<double>? get ringRadii => _ringRadii != null ? List.unmodifiable(_ringRadii!) : null;
+  List<double>? get ringRadii =>
+      _ringRadii != null ? List.unmodifiable(_ringRadii!) : null;
   double get imageAspectRatio => _imageAspectRatio;
   List<Shot> get shots => List.unmodifiable(_shots);
   ScoreResult? get scoreResult => _scoreResult;
@@ -97,13 +104,22 @@ class AnalysisProvider extends ChangeNotifier {
   DistortionParameters? get distortionParams => _distortionParams;
   String? get correctedImagePath => _correctedImagePath;
   bool get hasDistortion => _distortionParams?.needsCorrection ?? false;

   /// Returns the path of the image to display (corrected if enabled, otherwise the original)
-  String? get displayImagePath => _distortionCorrectionEnabled && _correctedImagePath != null
+  String? get displayImagePath =>
+      _distortionCorrectionEnabled && _correctedImagePath != null
       ? _correctedImagePath
       : _imagePath;

   /// Analyze an image
-  Future<void> analyzeImage(String imagePath, TargetType targetType) async {
+  ///
+  /// [autoAnalyze] determines if we should run automatic detection immediately.
+  /// If false, only the image is loaded and default target parameters are set.
+  Future<void> analyzeImage(
+    String imagePath,
+    TargetType targetType, {
+    bool autoAnalyze = true,
+  }) async {
     _state = AnalysisState.loading;
     _imagePath = imagePath;
     _targetType = targetType;
@@ -119,6 +135,21 @@ class AnalysisProvider extends ChangeNotifier {
     _imageAspectRatio = frame.image.width / frame.image.height;
     frame.image.dispose();

+    if (!autoAnalyze) {
+      // Just setup default values without running detection
+      _targetCenterX = 0.5;
+      _targetCenterY = 0.5;
+      _targetRadius = 0.4;
+      _targetInnerRadius = 0.04;
+
+      // Initialize empty shots list
+      _shots = [];
+
+      _state = AnalysisState.success;
+      notifyListeners();
+      return;
+    }
+
     // Detect target and impacts
     final result = _detectionService.detectTarget(imagePath, targetType);
@@ -132,6 +163,7 @@ class AnalysisProvider extends ChangeNotifier {
       _targetCenterX = result.centerX;
       _targetCenterY = result.centerY;
       _targetRadius = result.radius;
+      _targetInnerRadius = result.radius * 0.1;

       // Create shots from detected impacts
       _shots = result.impacts.map((impact) {
@@ -162,13 +194,7 @@ class AnalysisProvider extends ChangeNotifier {
   /// Add a manual shot
   void addShot(double x, double y) {
     final score = _calculateShotScore(x, y);
-    final shot = Shot(
-      id: _uuid.v4(),
-      x: x,
-      y: y,
-      score: score,
-      sessionId: '',
-    );
+    final shot = Shot(id: _uuid.v4(), x: x, y: y, score: score, sessionId: '');

     _shots.add(shot);
     _recalculateScores();
@@ -190,11 +216,7 @@ class AnalysisProvider extends ChangeNotifier {
     if (index == -1) return;

     final newScore = _calculateShotScore(newX, newY);
-    _shots[index] = _shots[index].copyWith(
-      x: newX,
-      y: newY,
-      score: newScore,
-    );
+    _shots[index] = _shots[index].copyWith(x: newX, y: newY, score: newScore);

     _recalculateScores();
     _recalculateGrouping();
@@ -254,16 +276,137 @@ class AnalysisProvider extends ChangeNotifier {
     return detectedImpacts.length;
   }

+  /// Auto-detect impacts using OpenCV (Hough Circles + Contours)
+  ///
+  /// NOTE: OpenCV is currently disabled on Windows because of build
+  /// issues. This method returns 0 (no impacts detected).
+  /// Use autoDetectImpacts() instead.
+  ///
+  /// Uses OpenCV algorithms for more robust detection:
+  /// - Hough transform to detect circles
+  /// - Contour analysis with circularity filtering
+  Future<int> autoDetectImpactsWithOpenCV({
+    double cannyThreshold1 = 50,
+    double cannyThreshold2 = 150,
+    double minDist = 20,
+    double param1 = 100,
+    double param2 = 30,
+    int minRadius = 5,
+    int maxRadius = 50,
+    int blurSize = 5,
+    bool useContourDetection = true,
+    double minCircularity = 0.6,
+    double minContourArea = 50,
+    double maxContourArea = 5000,
+    bool clearExisting = false,
+  }) async {
+    if (_imagePath == null || _targetType == null) return 0;
+
+    final settings = OpenCVDetectionSettings(
+      cannyThreshold1: cannyThreshold1,
+      cannyThreshold2: cannyThreshold2,
+      minDist: minDist,
+      param1: param1,
+      param2: param2,
+      minRadius: minRadius,
+      maxRadius: maxRadius,
+      blurSize: blurSize,
+      useContourDetection: useContourDetection,
+      minCircularity: minCircularity,
+      minContourArea: minContourArea,
+      maxContourArea: maxContourArea,
+    );
+
+    final detectedImpacts = _detectionService.detectImpactsWithOpenCV(
+      _imagePath!,
+      _targetType!,
+      _targetCenterX,
+      _targetCenterY,
+      _targetRadius,
+      _ringCount,
+      settings: settings,
+    );
+
+    if (clearExisting) {
+      _shots.clear();
+    }
+
+    // Add detected impacts as shots
+    for (final impact in detectedImpacts) {
+      final score = _calculateShotScore(impact.x, impact.y);
+      final shot = Shot(
+        id: _uuid.v4(),
+        x: impact.x,
+        y: impact.y,
+        score: score,
+        sessionId: '',
+      );
+      _shots.add(shot);
+    }
+
+    _recalculateScores();
+    _recalculateGrouping();
+    notifyListeners();
+
+    return detectedImpacts.length;
+  }
+
+  /// Detect impacts with OpenCV using reference points
+  Future<int> detectFromReferencesWithOpenCV({
+    double tolerance = 2.0,
+    bool clearExisting = false,
+  }) async {
+    if (_imagePath == null ||
+        _targetType == null ||
+        _referenceImpacts.length < 2) {
+      return 0;
+    }
+
+    // Convert the references
+    final references = _referenceImpacts
+        .map((shot) => ReferenceImpact(x: shot.x, y: shot.y))
+        .toList();
+
+    final detectedImpacts = _detectionService
+        .detectImpactsWithOpenCVFromReferences(
+      _imagePath!,
+      _targetType!,
+      _targetCenterX,
+      _targetCenterY,
+      _targetRadius,
+      _ringCount,
+      references,
+      tolerance: tolerance,
+    );
+
+    if (clearExisting) {
+      _shots.clear();
+    }
+
+    // Add detected impacts as shots
+    for (final impact in detectedImpacts) {
+      final score = _calculateShotScore(impact.x, impact.y);
+      final shot = Shot(
+        id: _uuid.v4(),
+        x: impact.x,
+        y: impact.y,
+        score: score,
+        sessionId: '',
+      );
+      _shots.add(shot);
+    }
+
+    _recalculateScores();
+    _recalculateGrouping();
+    notifyListeners();
+
+    return detectedImpacts.length;
+  }
+
   /// Add a reference impact for calibrated detection
   void addReferenceImpact(double x, double y) {
     final score = _calculateShotScore(x, y);
-    final shot = Shot(
-      id: _uuid.v4(),
-      x: x,
-      y: y,
-      score: score,
-      sessionId: '',
-    );
+    final shot = Shot(id: _uuid.v4(), x: x, y: y, score: score, sessionId: '');
     _referenceImpacts.add(shot);
     notifyListeners();
   }
@@ -304,7 +447,9 @@ class AnalysisProvider extends ChangeNotifier {
     double tolerance = 2.0,
     bool clearExisting = false,
   }) async {
-    if (_imagePath == null || _targetType == null || _learnedCharacteristics == null) {
+    if (_imagePath == null ||
+        _targetType == null ||
+        _learnedCharacteristics == null) {
       return 0;
     }
@@ -344,9 +489,17 @@ class AnalysisProvider extends ChangeNotifier {
   }

   /// Adjust target position
-  void adjustTargetPosition(double centerX, double centerY, double radius, {int? ringCount, List<double>? ringRadii}) {
+  void adjustTargetPosition(
+    double centerX,
+    double centerY,
+    double innerRadius,
+    double radius, {
+    int? ringCount,
+    List<double>? ringRadii,
+  }) {
     _targetCenterX = centerX;
     _targetCenterY = centerY;
+    _targetInnerRadius = innerRadius;
     _targetRadius = radius;
     if (ringCount != null) {
       _ringCount = ringCount;
@@ -365,6 +518,43 @@ class AnalysisProvider extends ChangeNotifier {
     notifyListeners();
   }

+  /// Auto-calibrate target using OpenCV
+  Future<bool> autoCalibrateTarget() async {
+    if (_imagePath == null) return false;
+
+    try {
+      // 1. Attempt to correct perspective/distortion first
+      final correctedPath = await _distortionService
+          .correctPerspectiveWithConcentricMesh(_imagePath!);
+
+      if (correctedPath != _imagePath) {
+        _imagePath = correctedPath;
+        _correctedImagePath = correctedPath;
+        _distortionCorrectionEnabled = true;
+        _imageAspectRatio =
+            1.0; // The corrected image is always square (side x side)
+        notifyListeners();
+      }
+
+      // 2. Detect the target on the straight/corrected image
+      final result = await _opencvTargetService.detectTarget(_imagePath!);
+
+      if (result.success) {
+        adjustTargetPosition(
+          result.centerX,
+          result.centerY,
+          result.radius * 0.1,
+          result.radius,
+        );
+        return true;
+      }
+      return false;
+    } catch (e) {
+      print('Auto-calibration error: $e');
+      return false;
+    }
+  }
+
   /// Computes the distortion parameters based on the current calibration
   void calculateDistortion() {
     _distortionParams = _distortionService.calculateDistortionFromCalibration(
@@ -405,6 +595,44 @@ class AnalysisProvider extends ChangeNotifier {
     }
   }

+  /* version two, to be tested */
+  /// Computes AND applies the correction for immediate feedback
+  Future<void> calculateAndApplyDistortion() async {
+    // 1. Compute the parameters (your current code)
+    _distortionParams = _distortionService.calculateDistortionFromCalibration(
+      targetCenterX: _targetCenterX,
+      targetCenterY: _targetCenterY,
+      targetRadius: _targetRadius,
+      imageAspectRatio: _imageAspectRatio,
+    );
+
+    // 2. Check whether a correction is actually needed
+    if (_distortionParams != null && _distortionParams!.needsCorrection) {
+      // 3. Apply the transformation immediately (asynchronous method)
+      await applyDistortionCorrection();
+    } else {
+      notifyListeners(); // Still notify even if there is no correction
+    }
+  }
+
+  Future<void> runFullDistortionWorkflow() async {
+    _state = AnalysisState.loading; // Shows a spinner in your UI
+    notifyListeners();
+
+    try {
+      calculateDistortion(); // Computes the parameters
+      await applyDistortionCorrection(); // Generates the corrected file
+      _distortionCorrectionEnabled = true; // Enables the display
+      _state = AnalysisState.success;
+    } catch (e) {
+      _errorMessage = "Erreur de rendu : $e";
+      _state = AnalysisState.error;
+    } finally {
+      notifyListeners();
+    }
+  }
+  /* end of version two, to be tested */
+
   int _calculateShotScore(double x, double y) {
     if (_targetType == TargetType.concentric) {
       return _scoreCalculatorService.calculateConcentricScore(
@@ -484,6 +712,7 @@ class AnalysisProvider extends ChangeNotifier {
     _targetCenterX = 0.5;
     _targetCenterY = 0.5;
     _targetRadius = 0.4;
+    _targetInnerRadius = 0.04;
    _ringCount = 10;
     _ringRadii = null;
     _imageAspectRatio = 1.0;
File diff suppressed because it is too large
@@ -13,16 +13,26 @@ class TargetCalibration extends StatefulWidget {
   final double initialCenterX;
   final double initialCenterY;
   final double initialRadius;
+  final double initialInnerRadius;
   final int initialRingCount;
   final TargetType targetType;
   final List<double>? initialRingRadii;
-  final Function(double centerX, double centerY, double radius, int ringCount, {List<double>? ringRadii}) onCalibrationChanged;
+  final Function(
+    double centerX,
+    double centerY,
+    double innerRadius,
+    double radius,
+    int ringCount, {
+    List<double>? ringRadii,
+  })
+  onCalibrationChanged;

   const TargetCalibration({
     super.key,
     required this.initialCenterX,
     required this.initialCenterY,
     required this.initialRadius,
+    required this.initialInnerRadius,
     this.initialRingCount = 10,
     required this.targetType,
     this.initialRingRadii,
@@ -37,11 +47,13 @@ class _TargetCalibrationState extends State<TargetCalibration> {
   late double _centerX;
   late double _centerY;
   late double _radius;
+  late double _innerRadius;
   late int _ringCount;
   late List<double> _ringRadii;

   bool _isDraggingCenter = false;
   bool _isDraggingRadius = false;
+  bool _isDraggingInnerRadius = false;

   @override
   void initState() {
@@ -49,28 +61,57 @@ class _TargetCalibrationState extends State<TargetCalibration> {
|
||||
_centerX = widget.initialCenterX;
|
||||
_centerY = widget.initialCenterY;
|
||||
_radius = widget.initialRadius;
|
||||
_innerRadius = widget.initialInnerRadius;
|
||||
_ringCount = widget.initialRingCount;
|
||||
_initRingRadii();
|
||||
}
|
||||
|
||||
void _initRingRadii() {
|
||||
if (widget.initialRingRadii != null && widget.initialRingRadii!.length == _ringCount) {
|
||||
if (widget.initialRingRadii != null &&
|
||||
widget.initialRingRadii!.length == _ringCount) {
|
||||
_ringRadii = List.from(widget.initialRingRadii!);
|
||||
} else {
|
||||
// Initialize with default proportional radii
|
||||
_ringRadii = List.generate(_ringCount, (i) => (i + 1) / _ringCount);
|
||||
// Initialize with default proportional radii interpolated between inner and outer
|
||||
_ringRadii = List.generate(_ringCount, (i) {
|
||||
if (_ringCount <= 1) return 1.0;
|
||||
final ratio = _innerRadius / _radius;
|
||||
return ratio + (1.0 - ratio) * i / (_ringCount - 1);
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
@override
|
||||
void didUpdateWidget(TargetCalibration oldWidget) {
|
||||
super.didUpdateWidget(oldWidget);
|
||||
bool shouldReinit = false;
|
||||
|
||||
if (widget.initialCenterX != oldWidget.initialCenterX &&
|
||||
!_isDraggingCenter) {
|
||||
_centerX = widget.initialCenterX;
|
||||
}
|
||||
if (widget.initialCenterY != oldWidget.initialCenterY &&
|
||||
!_isDraggingCenter) {
|
||||
_centerY = widget.initialCenterY;
|
||||
}
|
||||
if (widget.initialRingCount != oldWidget.initialRingCount) {
|
||||
_ringCount = widget.initialRingCount;
|
||||
_initRingRadii();
|
||||
shouldReinit = true;
|
||||
}
|
||||
if (widget.initialRadius != oldWidget.initialRadius && !_isDraggingRadius) {
|
||||
_radius = widget.initialRadius;
|
||||
shouldReinit = true;
|
||||
}
|
||||
if (widget.initialInnerRadius != oldWidget.initialInnerRadius &&
|
||||
!_isDraggingInnerRadius) {
|
||||
_innerRadius = widget.initialInnerRadius;
|
||||
shouldReinit = true;
|
||||
}
|
||||
if (widget.initialRingRadii != oldWidget.initialRingRadii) {
|
||||
shouldReinit = true;
|
||||
}
|
||||
|
||||
if (shouldReinit) {
|
||||
_initRingRadii();
|
||||
}
|
||||
}
|
||||
|
||||
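The rewritten else-branch of `_initRingRadii` above replaces the old uniform spacing `(i + 1) / ringCount` with a linear interpolation that starts at the inner-to-outer ratio and ends at 1.0. A quick Python sketch of the same formula (the function name is mine, not from the codebase):

```python
def ring_radii(inner_radius, outer_radius, ring_count):
    """Fractions of the outer radius for each ring boundary,
    spaced linearly from the inner ring out to the edge."""
    if ring_count <= 1:
        return [1.0]
    ratio = inner_radius / outer_radius
    return [ratio + (1.0 - ratio) * i / (ring_count - 1)
            for i in range(ring_count)]

# With the defaults above (inner 0.04, outer 0.4), the first ring sits
# at 10% of the radius and the last exactly at the outer edge.
```

This is why the drag handlers call `_initRingRadii()` again after a radius change: the ring fractions depend on the inner/outer ratio, not just on the ring count.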
@@ -90,11 +131,13 @@ class _TargetCalibrationState extends State<TargetCalibration> {
centerX: _centerX,
centerY: _centerY,
radius: _radius,
innerRadius: _innerRadius,
ringCount: _ringCount,
ringRadii: _ringRadii,
targetType: widget.targetType,
isDraggingCenter: _isDraggingCenter,
isDraggingRadius: _isDraggingRadius,
isDraggingInnerRadius: _isDraggingInnerRadius,
),
),
);
@@ -109,21 +152,42 @@ class _TargetCalibrationState extends State<TargetCalibration> {
// Check if tapping on center handle
final distToCenter = _distance(tapX, tapY, _centerX, _centerY);

// Check if tapping on radius handle (on the right edge of the outermost circle)
// Check if tapping on outer radius handle
final minDim = math.min(size.width, size.height);
final outerRadius = _radius * (_ringRadii.isNotEmpty ? _ringRadii.last : 1.0);
final outerRadius = _radius;
final radiusHandleX = _centerX + outerRadius * minDim / size.width;
final radiusHandleY = _centerY;
final distToRadiusHandle = _distance(tapX, tapY, radiusHandleX.clamp(0.0, 1.0), radiusHandleY.clamp(0.0, 1.0));
final distToOuterHandle = _distance(
tapX,
tapY,
radiusHandleX.clamp(0.0, 1.0),
radiusHandleY.clamp(0.0, 1.0),
);

// Check if tapping on inner radius handle (top edge of innermost circle)
final actualInnerRadius = _innerRadius;
final innerHandleX = _centerX;
final innerHandleY = _centerY - actualInnerRadius * minDim / size.height;
final distToInnerHandle = _distance(
tapX,
tapY,
innerHandleX.clamp(0.0, 1.0),
innerHandleY.clamp(0.0, 1.0),
);

// Increase touch target size slightly for handles
if (distToCenter < 0.05) {
setState(() {
_isDraggingCenter = true;
});
} else if (distToRadiusHandle < 0.05) {
} else if (distToOuterHandle < 0.05) {
setState(() {
_isDraggingRadius = true;
});
} else if (distToInnerHandle < 0.05) {
setState(() {
_isDraggingInnerRadius = true;
});
} else if (distToCenter < _radius + 0.02) {
// Tapping inside the target - move center
setState(() {
@@ -143,19 +207,36 @@ class _TargetCalibrationState extends State<TargetCalibration> {
_centerX = _centerX + deltaX;
_centerY = _centerY + deltaY;
} else if (_isDraggingRadius) {
// Adjust outer radius (scales all rings proportionally)
// Adjust outer radius
final newRadius = _radius + deltaX * (size.width / minDim);
_radius = newRadius.clamp(0.05, 3.0);
_radius = newRadius.clamp(math.max(0.05, _innerRadius + 0.01), 3.0);
_initRingRadii(); // Recalculate linear separation
} else if (_isDraggingInnerRadius) {
// Adjust inner radius (dragging up makes deltaY negative as the radius grows, hence the subtraction)
final newInnerRadius = _innerRadius - deltaY * (size.height / minDim);
_innerRadius = newInnerRadius.clamp(
0.01,
math.max(0.01, _radius - 0.01),
);
_initRingRadii(); // Recalculate linear separation
}
});

widget.onCalibrationChanged(_centerX, _centerY, _radius, _ringCount, ringRadii: _ringRadii);
widget.onCalibrationChanged(
_centerX,
_centerY,
_innerRadius,
_radius,
_ringCount,
ringRadii: _ringRadii,
);
}

void _onPanEnd() {
setState(() {
_isDraggingCenter = false;
_isDraggingRadius = false;
_isDraggingInnerRadius = false;
});
}

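The two drag branches above keep the radii from crossing: the outer radius is clamped to at least `_innerRadius + 0.01`, and the inner radius to at most `_radius - 0.01`. A small Python sketch of that invariant, mirroring Dart's `clamp(lower, upper)` semantics (names and the 0.01 gap constant are taken from the hunk, the helper functions are mine):

```python
MIN_GAP = 0.01  # minimum separation kept between inner and outer radius

def clamp_outer(new_outer, inner):
    # Outer radius stays within [max(0.05, inner + gap), 3.0].
    lower = max(0.05, inner + MIN_GAP)
    return min(max(new_outer, lower), 3.0)

def clamp_inner(new_inner, outer):
    # Inner radius stays within [0.01, max(0.01, outer - gap)].
    upper = max(0.01, outer - MIN_GAP)
    return min(max(new_inner, 0.01), upper)
```

Whichever handle is being dragged, the other radius acts as a moving bound, so `inner < outer` holds after every pan update.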
@@ -170,21 +251,25 @@ class _CalibrationPainter extends CustomPainter {
final double centerX;
final double centerY;
final double radius;
final double innerRadius;
final int ringCount;
final List<double> ringRadii;
final TargetType targetType;
final bool isDraggingCenter;
final bool isDraggingRadius;
final bool isDraggingInnerRadius;

_CalibrationPainter({
required this.centerX,
required this.centerY,
required this.radius,
required this.innerRadius,
required this.ringCount,
required this.ringRadii,
required this.targetType,
required this.isDraggingCenter,
required this.isDraggingRadius,
required this.isDraggingInnerRadius,
});

@override
@@ -192,6 +277,7 @@ class _CalibrationPainter extends CustomPainter {
final centerPx = Offset(centerX * size.width, centerY * size.height);
final minDim = size.width < size.height ? size.width : size.height;
final baseRadiusPx = radius * minDim;
final innerRadiusPx = innerRadius * minDim;

if (targetType == TargetType.concentric) {
_drawConcentricZones(canvas, size, centerPx, baseRadiusPx);
@@ -199,17 +285,42 @@ class _CalibrationPainter extends CustomPainter {
_drawSilhouetteZones(canvas, size, centerPx, baseRadiusPx);
}

// Fullscreen crosshairs when dragging center
if (isDraggingCenter) {
final crosshairLinePaint = Paint()
..color = AppTheme.successColor.withValues(alpha: 0.5)
..strokeWidth = 1;
canvas.drawLine(
Offset(0, centerPx.dy),
Offset(size.width, centerPx.dy),
crosshairLinePaint,
);
canvas.drawLine(
Offset(centerPx.dx, 0),
Offset(centerPx.dx, size.height),
crosshairLinePaint,
);
}

// Draw center handle
_drawCenterHandle(canvas, centerPx);

// Draw radius handle (for outer ring)
_drawRadiusHandle(canvas, size, centerPx, baseRadiusPx);

// Draw inner radius handle
_drawInnerRadiusHandle(canvas, size, centerPx, innerRadiusPx);

// Draw instructions
_drawInstructions(canvas, size);
}

void _drawConcentricZones(Canvas canvas, Size size, Offset center, double baseRadius) {
void _drawConcentricZones(
Canvas canvas,
Size size,
Offset center,
double baseRadius,
) {
// Generate colors for zones
List<Color> zoneColors = [];
for (int i = 0; i < ringCount; i++) {
@@ -235,7 +346,9 @@ class _CalibrationPainter extends CustomPainter {

// Draw from outside to inside
for (int i = ringCount - 1; i >= 0; i--) {
final ringRadius = ringRadii.length > i ? ringRadii[i] : (i + 1) / ringCount;
final ringRadius = ringRadii.length > i
? ringRadii[i]
: (i + 1) / ringCount;
final zoneRadius = baseRadius * ringRadius;

zonePaint.color = zoneColors[i];
@@ -244,12 +357,12 @@ class _CalibrationPainter extends CustomPainter {
}

// Draw zone labels (only if within visible area)
final textPainter = TextPainter(
textDirection: TextDirection.ltr,
);
final textPainter = TextPainter(textDirection: TextDirection.ltr);

for (int i = 0; i < ringCount; i++) {
final ringRadius = ringRadii.length > i ? ringRadii[i] : (i + 1) / ringCount;
final ringRadius = ringRadii.length > i
? ringRadii[i]
: (i + 1) / ringCount;
final prevRingRadius = i > 0
? (ringRadii.length > i - 1 ? ringRadii[i - 1] : i / ringCount)
: 0.0;
@@ -268,9 +381,7 @@ class _CalibrationPainter extends CustomPainter {
color: Colors.white.withValues(alpha: 0.9),
fontSize: 12,
fontWeight: FontWeight.bold,
shadows: const [
Shadow(color: Colors.black, blurRadius: 2),
],
shadows: const [Shadow(color: Colors.black, blurRadius: 2)],
),
);
textPainter.layout();
@@ -278,14 +389,24 @@ class _CalibrationPainter extends CustomPainter {
// Draw label on the right side of each zone
final labelY = center.dy - textPainter.height / 2;
if (labelY >= 0 && labelY <= size.height) {
textPainter.paint(canvas, Offset(labelX - textPainter.width / 2, labelY));
textPainter.paint(
canvas,
Offset(labelX - textPainter.width / 2, labelY),
);
}
}
}

void _drawSilhouetteZones(Canvas canvas, Size size, Offset center, double radius) {
void _drawSilhouetteZones(
Canvas canvas,
Size size,
Offset center,
double radius,
) {
// Simplified silhouette zones
final paint = Paint()..style = PaintingStyle.stroke..strokeWidth = 2;
final paint = Paint()
..style = PaintingStyle.stroke
..strokeWidth = 2;

// Draw silhouette outline (simplified as rectangle for now)
final silhouetteWidth = radius * 0.8;
@@ -293,7 +414,11 @@ class _CalibrationPainter extends CustomPainter {

paint.color = Colors.green.withValues(alpha: 0.5);
canvas.drawRect(
Rect.fromCenter(center: center, width: silhouetteWidth, height: silhouetteHeight),
Rect.fromCenter(
center: center,
width: silhouetteWidth,
height: silhouetteHeight,
),
paint,
);
}
@@ -316,17 +441,36 @@ class _CalibrationPainter extends CustomPainter {
final crossPaint = Paint()
..color = isDraggingCenter ? AppTheme.successColor : AppTheme.primaryColor
..strokeWidth = 2;
canvas.drawLine(Offset(center.dx - 20, center.dy), Offset(center.dx - 8, center.dy), crossPaint);
canvas.drawLine(Offset(center.dx + 8, center.dy), Offset(center.dx + 20, center.dy), crossPaint);
canvas.drawLine(Offset(center.dx, center.dy - 20), Offset(center.dx, center.dy - 8), crossPaint);
canvas.drawLine(Offset(center.dx, center.dy + 8), Offset(center.dx, center.dy + 20), crossPaint);
canvas.drawLine(
Offset(center.dx - 20, center.dy),
Offset(center.dx - 8, center.dy),
crossPaint,
);
canvas.drawLine(
Offset(center.dx + 8, center.dy),
Offset(center.dx + 20, center.dy),
crossPaint,
);
canvas.drawLine(
Offset(center.dx, center.dy - 20),
Offset(center.dx, center.dy - 8),
crossPaint,
);
canvas.drawLine(
Offset(center.dx, center.dy + 8),
Offset(center.dx, center.dy + 20),
crossPaint,
);
}

void _drawRadiusHandle(Canvas canvas, Size size, Offset center, double baseRadius) {
void _drawRadiusHandle(
Canvas canvas,
Size size,
Offset center,
double baseRadius,
) {
// Radius handle on the right edge of the outermost ring
final outerRingRadius = ringRadii.isNotEmpty ? ringRadii.last : 1.0;
final actualRadius = baseRadius * outerRingRadius;
final actualHandleX = center.dx + actualRadius;
final actualHandleX = center.dx + baseRadius;
final clampedHandleX = actualHandleX.clamp(20.0, size.width - 20);
final clampedHandleY = center.dy.clamp(20.0, size.height - 20);
final handlePos = Offset(clampedHandleX, clampedHandleY);
@@ -376,7 +520,7 @@ class _CalibrationPainter extends CustomPainter {
// Label
final textPainter = TextPainter(
text: const TextSpan(
text: 'RAYON',
text: 'EXT.',
style: TextStyle(
color: Colors.white,
fontSize: 8,
@@ -392,6 +536,78 @@ class _CalibrationPainter extends CustomPainter {
);
}

void _drawInnerRadiusHandle(
Canvas canvas,
Size size,
Offset center,
double innerRadiusPx,
) {
// Inner radius handle on the top edge of the innermost ring
final actualHandleY = center.dy - innerRadiusPx;
final clampedHandleX = center.dx.clamp(20.0, size.width - 20);
final clampedHandleY = actualHandleY.clamp(20.0, size.height - 20);
final handlePos = Offset(clampedHandleX, clampedHandleY);

final isClamped = actualHandleY < 20.0;

final paint = Paint()
..color = isDraggingInnerRadius
? AppTheme.successColor
: (isClamped ? Colors.orange : Colors.purpleAccent)
..style = PaintingStyle.fill;

// Draw handle
canvas.drawCircle(handlePos, 14, paint);

// Up/Down arrow indicators
final arrowPaint = Paint()
..color = Colors.white
..strokeWidth = 2
..style = PaintingStyle.stroke;

// Up arrow
canvas.drawLine(
Offset(handlePos.dx, handlePos.dy - 4),
Offset(handlePos.dx - 4, handlePos.dy - 8),
arrowPaint,
);
canvas.drawLine(
Offset(handlePos.dx, handlePos.dy - 4),
Offset(handlePos.dx + 4, handlePos.dy - 8),
arrowPaint,
);

// Down arrow
canvas.drawLine(
Offset(handlePos.dx, handlePos.dy + 4),
Offset(handlePos.dx - 4, handlePos.dy + 8),
arrowPaint,
);
canvas.drawLine(
Offset(handlePos.dx, handlePos.dy + 4),
Offset(handlePos.dx + 4, handlePos.dy + 8),
arrowPaint,
);

// Label
final textPainter = TextPainter(
text: const TextSpan(
text: 'INT.',
style: TextStyle(
color: Colors.white,
fontSize: 8,
fontWeight: FontWeight.bold,
),
),
textDirection: TextDirection.ltr,
);
textPainter.layout();
textPainter.paint(
canvas,
Offset(handlePos.dx - textPainter.width / 2, handlePos.dy - 24),
);
}

void _drawInstructions(Canvas canvas, Size size) {
const instruction = 'Deplacez le centre ou ajustez le rayon';

@@ -418,9 +634,11 @@ class _CalibrationPainter extends CustomPainter {
return centerX != oldDelegate.centerX ||
centerY != oldDelegate.centerY ||
radius != oldDelegate.radius ||
innerRadius != oldDelegate.innerRadius ||
ringCount != oldDelegate.ringCount ||
isDraggingCenter != oldDelegate.isDraggingCenter ||
isDraggingRadius != oldDelegate.isDraggingRadius ||
isDraggingInnerRadius != oldDelegate.isDraggingInnerRadius ||
ringRadii != oldDelegate.ringRadii;
}
}

@@ -6,13 +6,13 @@
library;

import 'dart:io';
import 'package:google_mlkit_document_scanner/google_mlkit_document_scanner.dart';
import 'package:flutter/material.dart';
import 'package:image_picker/image_picker.dart';
import '../../core/constants/app_constants.dart';
import '../../core/theme/app_theme.dart';
import '../../data/models/target_type.dart';
import '../crop/crop_screen.dart';
import 'widgets/target_type_selector.dart';
import 'widgets/image_source_button.dart';

class CaptureScreen extends StatefulWidget {
@@ -31,23 +31,12 @@ class _CaptureScreenState extends State<CaptureScreen> {
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: const Text('Nouvelle Analyse'),
),
appBar: AppBar(title: const Text('Nouvelle Analyse')),
body: SingleChildScrollView(
padding: const EdgeInsets.all(AppConstants.defaultPadding),
child: Column(
crossAxisAlignment: CrossAxisAlignment.stretch,
children: [
// Target type selection
_buildSectionTitle('Type de Cible'),
const SizedBox(height: 12),
TargetTypeSelector(
selectedType: _selectedType,
onTypeSelected: (type) {
setState(() => _selectedType = type);
},
),
const SizedBox(height: AppConstants.largePadding),

// Image source selection
@@ -58,8 +47,8 @@ class _CaptureScreenState extends State<CaptureScreen> {
Expanded(
child: ImageSourceButton(
icon: Icons.camera_alt,
label: 'Camera',
onPressed: _isLoading ? null : () => _captureImage(ImageSource.camera),
label: 'Scanner',
onPressed: _isLoading ? null : _scanDocument,
),
),
const SizedBox(width: 12),
@@ -67,7 +56,9 @@ class _CaptureScreenState extends State<CaptureScreen> {
child: ImageSourceButton(
icon: Icons.photo_library,
label: 'Galerie',
onPressed: _isLoading ? null : () => _captureImage(ImageSource.gallery),
onPressed: _isLoading
? null
: () => _captureImage(ImageSource.gallery),
),
),
],
@@ -86,16 +77,15 @@ class _CaptureScreenState extends State<CaptureScreen> {
_buildImagePreview(),

// Guide text
if (_selectedImagePath == null && !_isLoading)
_buildGuide(),
if (_selectedImagePath == null && !_isLoading) _buildGuide(),
],
),
),
floatingActionButton: _selectedImagePath != null
? FloatingActionButton.extended(
onPressed: _analyzeImage,
icon: const Icon(Icons.analytics),
label: const Text('Analyser'),
icon: const Icon(Icons.arrow_forward),
label: const Text('Suivant'),
)
: null,
);
@@ -104,9 +94,9 @@ class _CaptureScreenState extends State<CaptureScreen> {
Widget _buildSectionTitle(String title) {
return Text(
title,
style: Theme.of(context).textTheme.titleMedium?.copyWith(
fontWeight: FontWeight.bold,
),
style: Theme.of(
context,
).textTheme.titleMedium?.copyWith(fontWeight: FontWeight.bold),
);
}

@@ -159,7 +149,9 @@ class _CaptureScreenState extends State<CaptureScreen> {
Expanded(
child: Text(
'Assurez-vous que la cible est bien centree et visible.',
style: TextStyle(color: AppTheme.warningColor.withValues(alpha: 0.8)),
style: TextStyle(
color: AppTheme.warningColor.withValues(alpha: 0.8),
),
),
),
],
@@ -174,20 +166,19 @@ class _CaptureScreenState extends State<CaptureScreen> {
padding: const EdgeInsets.all(AppConstants.defaultPadding),
child: Column(
children: [
Icon(
Icons.help_outline,
size: 48,
color: Colors.grey[400],
),
Icon(Icons.help_outline, size: 48, color: Colors.grey[400]),
const SizedBox(height: 12),
Text(
'Conseils pour une bonne analyse',
style: Theme.of(context).textTheme.titleSmall?.copyWith(
fontWeight: FontWeight.bold,
),
style: Theme.of(
context,
).textTheme.titleSmall?.copyWith(fontWeight: FontWeight.bold),
),
const SizedBox(height: 12),
_buildGuideItem(Icons.crop_free, 'Cadrez la cible entiere dans l\'image'),
_buildGuideItem(
Icons.crop_free,
'Cadrez la cible entiere dans l\'image',
),
_buildGuideItem(Icons.wb_sunny, 'Utilisez un bon eclairage'),
_buildGuideItem(Icons.straighten, 'Prenez la photo de face'),
_buildGuideItem(Icons.blur_off, 'Evitez les images floues'),
@@ -210,6 +201,39 @@ class _CaptureScreenState extends State<CaptureScreen> {
);
}

Future<void> _scanDocument() async {
setState(() => _isLoading = true);

try {
final options = DocumentScannerOptions(
documentFormat: DocumentFormat.jpeg,
mode: ScannerMode.base,
pageLimit: 1,
isGalleryImport: false,
);

final scanner = DocumentScanner(options: options);
final documents = await scanner.scanDocument();

if (documents.images.isNotEmpty) {
setState(() => _selectedImagePath = documents.images.first);
}
} catch (e) {
if (mounted) {
ScaffoldMessenger.of(context).showSnackBar(
SnackBar(
content: Text('Erreur lors du scan: $e'),
backgroundColor: AppTheme.errorColor,
),
);
}
} finally {
if (mounted) {
setState(() => _isLoading = false);
}
}
}

Future<void> _captureImage(ImageSource source) async {
setState(() => _isLoading = true);

@@ -119,7 +119,8 @@ class _CropScreenState extends State<CropScreen> {
_viewportSize = Size(constraints.maxWidth, constraints.maxHeight);

// Crop square size (85% of the smaller dimension)
_cropSize = math.min(constraints.maxWidth, constraints.maxHeight) * 0.85;
_cropSize =
math.min(constraints.maxWidth, constraints.maxHeight) * 0.85;

// Compute the initial scale if not already done
if (_scale == 1.0 && _offset == Offset.zero) {
@@ -138,7 +139,7 @@ class _CropScreenState extends State<CropScreen> {
child: Transform(
transform: Matrix4.identity()
..setTranslationRaw(_offset.dx, _offset.dy, 0)
..scale(_scale, _scale, 1.0),
..scale(_scale, _scale),
alignment: Alignment.center,
child: Image.file(
File(widget.imagePath),
@@ -153,10 +154,7 @@ class _CropScreenState extends State<CropScreen> {
// Crop overlay
Positioned.fill(
child: IgnorePointer(
child: CropOverlay(
cropSize: _cropSize,
showGrid: true,
),
child: CropOverlay(cropSize: _cropSize, showGrid: true),
),
),

@@ -13,20 +13,13 @@ class CropOverlay extends StatelessWidget {
/// Show the rule-of-thirds grid
final bool showGrid;

const CropOverlay({
super.key,
required this.cropSize,
this.showGrid = true,
});
const CropOverlay({super.key, required this.cropSize, this.showGrid = true});

@override
Widget build(BuildContext context) {
return CustomPaint(
size: Size.infinite,
painter: _CropOverlayPainter(
cropSize: cropSize,
showGrid: showGrid,
),
painter: _CropOverlayPainter(cropSize: cropSize, showGrid: showGrid),
);
}
}
@@ -35,10 +28,7 @@ class _CropOverlayPainter extends CustomPainter {
final double cropSize;
final bool showGrid;

_CropOverlayPainter({
required this.cropSize,
required this.showGrid,
});
_CropOverlayPainter({required this.cropSize, required this.showGrid});

@override
void paint(Canvas canvas, Size size) {
@@ -77,6 +67,9 @@ class _CropOverlayPainter extends CustomPainter {
if (showGrid) {
_drawGrid(canvas, cropRect);
}

// Draw the center point (cross)
_drawCenterPoint(canvas, cropRect);
}

void _drawCorners(Canvas canvas, Rect rect) {
@@ -171,6 +164,38 @@ class _CropOverlayPainter extends CustomPainter {
);
}

void _drawCenterPoint(Canvas canvas, Rect rect) {
final centerPaint = Paint()
..color = Colors.white.withValues(alpha: 0.8)
..style = PaintingStyle.stroke
..strokeWidth = 2;

const size = 10.0;
final centerX = rect.center.dx;
final centerY = rect.center.dy;

// Horizontal line
canvas.drawLine(
Offset(centerX - size, centerY),
Offset(centerX + size, centerY),
centerPaint,
);

// Vertical line
canvas.drawLine(
Offset(centerX, centerY - size),
Offset(centerX, centerY + size),
centerPaint,
);

// Small center circle for precision (optional, but helps with aiming)
canvas.drawCircle(
rect.center,
2,
Paint()..color = Colors.red.withValues(alpha: 0.6),
);
}

@override
bool shouldRepaint(covariant _CropOverlayPainter oldDelegate) {
return cropSize != oldDelegate.cropSize || showGrid != oldDelegate.showGrid;

@@ -148,21 +148,27 @@ class _HomeScreenState extends State<HomeScreen> {
Text(
'Statistiques',
style: Theme.of(context).textTheme.titleLarge?.copyWith(
fontWeight: FontWeight.bold,
),
fontWeight: FontWeight.bold,
),
),
const SizedBox(height: 12),
Row(
children: [
// --- SESSIONS BUTTON (navigates to Statistics) ---
Expanded(
child: StatsCard(
icon: Icons.assessment,
title: 'Sessions',
value: '${_stats!['totalSessions']}',
color: AppTheme.primaryColor,
child: InkWell(
onTap: () => _navigateToStatistics(context),
borderRadius: BorderRadius.circular(AppConstants.borderRadius),
child: StatsCard(
icon: Icons.assessment,
title: 'Sessions',
value: '${_stats!['totalSessions']}',
color: AppTheme.primaryColor,
),
),
),
const SizedBox(width: 12),
// This button stays static (or an action could be added)
Expanded(
child: StatsCard(
icon: Icons.gps_fixed,
@@ -176,15 +182,21 @@ class _HomeScreenState extends State<HomeScreen> {
const SizedBox(height: 12),
Row(
children: [
// --- AVERAGE SCORE BUTTON (navigates to History) ---
Expanded(
child: StatsCard(
icon: Icons.trending_up,
title: 'Score Moyen',
value: (_stats!['averageScore'] as double).toStringAsFixed(1),
color: AppTheme.warningColor,
child: InkWell(
onTap: () => _navigateToHistory(context),
borderRadius: BorderRadius.circular(AppConstants.borderRadius),
child: StatsCard(
icon: Icons.trending_up,
title: 'Historique',
value: (_stats!['averageScore'] as double).toStringAsFixed(1),
color: AppTheme.warningColor,
),
),
),
const SizedBox(width: 12),
// This button stays static
Expanded(
child: StatsCard(
icon: Icons.emoji_events,

@@ -5,7 +5,6 @@
|
||||
/// écart-type et distribution régionale des tirs.
|
||||
library;
|
||||
|
||||
import 'package:flutter/foundation.dart';
|
||||
import 'package:flutter/material.dart';
|
||||
import 'package:provider/provider.dart';
|
||||
import '../../core/constants/app_constants.dart';
|
||||
@@ -69,28 +68,38 @@ class _StatisticsScreenState extends State<StatisticsScreen> {
|
||||
}
|
||||
|
||||
void _calculateStats() {
|
||||
debugPrint('Calculating stats for ${_allSessions.length} sessions, period: $_selectedPeriod');
|
||||
debugPrint(
|
||||
'Calculating stats for ${_allSessions.length} sessions, period: $_selectedPeriod',
|
||||
);
|
||||
for (final session in _allSessions) {
|
||||
debugPrint(' Session: ${session.id}, shots: ${session.shots.length}, date: ${session.createdAt}');
|
||||
debugPrint(
|
||||
' Session: ${session.id}, shots: ${session.shots.length}, date: ${session.createdAt}',
|
||||
);
|
||||
}
|
||||
_statistics = _statisticsService.calculateStatistics(
|
||||
_allSessions,
|
||||
period: _selectedPeriod,
|
||||
);
|
||||
debugPrint('Statistics result: totalShots=${_statistics?.totalShots}, totalScore=${_statistics?.totalScore}');
|
||||
debugPrint(
|
||||
'Statistics result: totalShots=${_statistics?.totalShots}, totalScore=${_statistics?.totalScore}',
);
}

@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text(widget.singleSession != null ? 'Statistiques Session' : 'Statistiques'),
title: Text(
widget.singleSession != null
? 'Statistiques Session'
: 'Statistiques',
),
),
body: _isLoading
? const Center(child: CircularProgressIndicator())
: _statistics == null || _statistics!.totalShots == 0
? _buildEmptyState()
: _buildStatistics(),
? _buildEmptyState()
: _buildStatistics(),
);
}

@@ -101,7 +110,11 @@ class _StatisticsScreenState extends State<StatisticsScreen> {
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: [
Icon(Icons.analytics_outlined, size: 64, color: Colors.grey.shade400),
Icon(
Icons.analytics_outlined,
size: 64,
color: Colors.grey.shade400,
),
const SizedBox(height: 16),
Text(
'Aucune donnee disponible',
@@ -292,11 +305,17 @@ class _StatisticsScreenState extends State<StatisticsScreen> {
children: [
Padding(
padding: const EdgeInsets.only(left: 16),
child: Text('Peu', style: TextStyle(fontSize: 12, color: Colors.grey.shade600)),
child: Text(
'Peu',
style: TextStyle(fontSize: 12, color: Colors.grey.shade600),
),
),
Padding(
padding: const EdgeInsets.only(right: 16),
child: Text('Beaucoup', style: TextStyle(fontSize: 12, color: Colors.grey.shade600)),
child: Text(
'Beaucoup',
style: TextStyle(fontSize: 12, color: Colors.grey.shade600),
),
),
],
),
@@ -306,28 +325,6 @@ class _StatisticsScreenState extends State<StatisticsScreen> {
);
}

Widget _buildLegendItem(Color color, String label) {
return Padding(
padding: const EdgeInsets.symmetric(horizontal: 4),
child: Row(
mainAxisSize: MainAxisSize.min,
children: [
Container(
width: 16,
height: 16,
decoration: BoxDecoration(
color: color,
borderRadius: BorderRadius.circular(2),
border: Border.all(color: Colors.grey.shade400),
),
),
const SizedBox(width: 4),
Text(label, style: const TextStyle(fontSize: 10)),
],
),
);
}

Widget _buildPrecisionSection() {
final precision = _statistics!.precision;

@@ -339,7 +336,10 @@ class _StatisticsScreenState extends State<StatisticsScreen> {
children: [
Row(
children: [
const Icon(Icons.center_focus_strong, color: AppTheme.successColor),
const Icon(
Icons.center_focus_strong,
color: AppTheme.successColor,
),
const SizedBox(width: 8),
const Text(
'Precision',
@@ -368,12 +368,18 @@ class _StatisticsScreenState extends State<StatisticsScreen> {
],
),
const Divider(height: 32),
_buildStatRow('Distance moyenne du centre',
'${(precision.avgDistanceFromCenter * 100).toStringAsFixed(1)}%'),
_buildStatRow('Diametre de groupement',
'${(precision.groupingDiameter * 100).toStringAsFixed(1)}%'),
_buildStatRow('Score moyen',
_statistics!.avgScore.toStringAsFixed(2)),
_buildStatRow(
'Distance moyenne du centre',
'${(precision.avgDistanceFromCenter * 100).toStringAsFixed(1)}%',
),
_buildStatRow(
'Diametre de groupement',
'${(precision.groupingDiameter * 100).toStringAsFixed(1)}%',
),
_buildStatRow(
'Score moyen',
_statistics!.avgScore.toStringAsFixed(2),
),
_buildStatRow('Meilleur score', '${_statistics!.maxScore}'),
_buildStatRow('Plus bas score', '${_statistics!.minScore}'),
],
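The precision rows above display "Distance moyenne du centre" and "Diametre de groupement" as percentages. A minimal sketch of how such figures can be computed, assuming impact positions normalized to 0..1 over the target image (`precision_metrics` and the shot tuples are illustrative, not the app's actual models):

```python
import math
from itertools import combinations

def precision_metrics(shots, center=(0.5, 0.5)):
    """Compute the two precision figures shown in the statistics screen.

    `shots` are (x, y) impact positions normalized to the image (0..1 on
    both axes). Returns (avg_distance_from_center, grouping_diameter) in
    the same normalized units; multiply by 100 to display them as
    percentages, as the Dart code does.
    """
    cx, cy = center
    avg_dist = sum(math.hypot(x - cx, y - cy) for x, y in shots) / len(shots)
    # Grouping diameter: largest pairwise distance between any two impacts.
    diameter = max(
        (math.hypot(x1 - x2, y1 - y2)
         for (x1, y1), (x2, y2) in combinations(shots, 2)),
        default=0.0,
    )
    return avg_dist, diameter

avg_dist, diameter = precision_metrics([(0.5, 0.4), (0.5, 0.6), (0.6, 0.5)])
print(f"{avg_dist * 100:.1f}%", f"{diameter * 100:.1f}%")  # 10.0% 20.0%
```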
@@ -386,8 +392,8 @@ class _StatisticsScreenState extends State<StatisticsScreen> {
final color = value > 70
? AppTheme.successColor
: value > 40
? AppTheme.warningColor
: AppTheme.errorColor;
? AppTheme.warningColor
: AppTheme.errorColor;

return Column(
children: [
@@ -405,7 +411,7 @@ class _StatisticsScreenState extends State<StatisticsScreen> {
),
),
Text(
'${value.toStringAsFixed(0)}',
value.toStringAsFixed(0),
style: TextStyle(
fontSize: 20,
fontWeight: FontWeight.bold,
@@ -415,10 +421,7 @@ class _StatisticsScreenState extends State<StatisticsScreen> {
],
),
const SizedBox(height: 8),
Text(
title,
style: const TextStyle(fontWeight: FontWeight.bold),
),
Text(title, style: const TextStyle(fontWeight: FontWeight.bold)),
Text(
subtitle,
style: TextStyle(fontSize: 10, color: Colors.grey.shade600),
@@ -439,7 +442,10 @@ class _StatisticsScreenState extends State<StatisticsScreen> {
children: [
Row(
children: [
const Icon(Icons.stacked_line_chart, color: AppTheme.warningColor),
const Icon(
Icons.stacked_line_chart,
color: AppTheme.warningColor,
),
const SizedBox(width: 8),
const Text(
'Ecart Type',
@@ -453,21 +459,32 @@ class _StatisticsScreenState extends State<StatisticsScreen> {
style: TextStyle(color: Colors.grey.shade600, fontSize: 12),
),
const SizedBox(height: 16),
_buildStatRow('Ecart type X (horizontal)',
'${(stdDev.stdDevX * 100).toStringAsFixed(2)}%'),
_buildStatRow('Ecart type Y (vertical)',
'${(stdDev.stdDevY * 100).toStringAsFixed(2)}%'),
_buildStatRow('Ecart type radial',
'${(stdDev.stdDevRadial * 100).toStringAsFixed(2)}%'),
_buildStatRow('Ecart type score',
stdDev.stdDevScore.toStringAsFixed(2)),
_buildStatRow(
'Ecart type X (horizontal)',
'${(stdDev.stdDevX * 100).toStringAsFixed(2)}%',
),
_buildStatRow(
'Ecart type Y (vertical)',
'${(stdDev.stdDevY * 100).toStringAsFixed(2)}%',
),
_buildStatRow(
'Ecart type radial',
'${(stdDev.stdDevRadial * 100).toStringAsFixed(2)}%',
),
_buildStatRow(
'Ecart type score',
stdDev.stdDevScore.toStringAsFixed(2),
),
const Divider(height: 24),
_buildStatRow('Position moyenne X',
'${(stdDev.meanX * 100).toStringAsFixed(1)}%'),
_buildStatRow('Position moyenne Y',
'${(stdDev.meanY * 100).toStringAsFixed(1)}%'),
_buildStatRow('Score moyen',
stdDev.meanScore.toStringAsFixed(2)),
_buildStatRow(
'Position moyenne X',
'${(stdDev.meanX * 100).toStringAsFixed(1)}%',
),
_buildStatRow(
'Position moyenne Y',
'${(stdDev.meanY * 100).toStringAsFixed(1)}%',
),
_buildStatRow('Score moyen', stdDev.meanScore.toStringAsFixed(2)),
],
),
),
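The "Ecart Type" card above reports horizontal, vertical, and radial standard deviations plus mean positions. A sketch of those computations on normalized 0..1 impact coordinates (population standard deviation is assumed here; the app may use the sample variant, and `dispersion_stats` is an illustrative name):

```python
import math
import statistics

def dispersion_stats(shots):
    """Sketch of the standard-deviation card values: spread of the shot
    group along X, along Y, and radially around the group's mean point.
    `shots` are (x, y) positions normalized to 0..1."""
    xs = [x for x, _ in shots]
    ys = [y for _, y in shots]
    mean_x, mean_y = statistics.fmean(xs), statistics.fmean(ys)
    std_x = statistics.pstdev(xs)
    std_y = statistics.pstdev(ys)
    # Radial spread: std dev of each impact's distance to the mean point.
    radii = [math.hypot(x - mean_x, y - mean_y) for x, y in shots]
    std_radial = statistics.pstdev(radii)
    return {"meanX": mean_x, "meanY": mean_y,
            "stdDevX": std_x, "stdDevY": std_y, "stdDevRadial": std_radial}

stats = dispersion_stats([(0.4, 0.5), (0.6, 0.5), (0.5, 0.4), (0.5, 0.6)])
print({k: f"{v * 100:.2f}%" for k, v in stats.items()})
```

For this symmetric four-shot group the radial std dev is zero: every impact sits at the same distance from the group's centre, even though X and Y each spread.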
@@ -504,7 +521,10 @@ class _StatisticsScreenState extends State<StatisticsScreen> {
),
child: Row(
children: [
const Icon(Icons.compass_calibration, color: AppTheme.primaryColor),
const Icon(
Icons.compass_calibration,
color: AppTheme.primaryColor,
),
const SizedBox(width: 12),
Expanded(
child: Column(
@@ -536,7 +556,10 @@ class _StatisticsScreenState extends State<StatisticsScreen> {
),
child: Row(
children: [
const Icon(Icons.warning_amber, color: AppTheme.warningColor),
const Icon(
Icons.warning_amber,
color: AppTheme.warningColor,
),
const SizedBox(width: 12),
Expanded(
child: Column(
@@ -556,7 +579,10 @@ class _StatisticsScreenState extends State<StatisticsScreen> {
const SizedBox(height: 16),

// Sector distribution
const Text('Repartition par secteur:', style: TextStyle(fontWeight: FontWeight.bold)),
const Text(
'Repartition par secteur:',
style: TextStyle(fontWeight: FontWeight.bold),
),
const SizedBox(height: 8),
Wrap(
spacing: 8,
@@ -572,7 +598,10 @@ class _StatisticsScreenState extends State<StatisticsScreen> {
const SizedBox(height: 16),

// Quadrant distribution
const Text('Repartition par quadrant:', style: TextStyle(fontWeight: FontWeight.bold)),
const Text(
'Repartition par quadrant:',
style: TextStyle(fontWeight: FontWeight.bold),
),
const SizedBox(height: 8),
_buildQuadrantGrid(regional.quadrantDistribution),
],
@@ -598,7 +627,9 @@ class _StatisticsScreenState extends State<StatisticsScreen> {
return Container(
padding: const EdgeInsets.symmetric(horizontal: 12, vertical: 6),
decoration: BoxDecoration(
color: count > 0 ? AppTheme.primaryColor.withValues(alpha: 0.1) : Colors.grey.shade100,
color: count > 0
? AppTheme.primaryColor.withValues(alpha: 0.1)
: Colors.grey.shade100,
borderRadius: BorderRadius.circular(16),
border: Border.all(
color: count > 0 ? AppTheme.primaryColor : Colors.grey.shade300,
@@ -649,10 +680,7 @@ class _StatisticsScreenState extends State<StatisticsScreen> {
children: [
Text(
'$count',
style: const TextStyle(
fontWeight: FontWeight.bold,
fontSize: 24,
),
style: const TextStyle(fontWeight: FontWeight.bold, fontSize: 24),
),
Text(
'${percentage.toStringAsFixed(0)}%',
@@ -712,10 +740,7 @@ class _StatCard extends StatelessWidget {
color: color,
),
),
Text(
title,
style: TextStyle(color: Colors.grey.shade600),
),
Text(title, style: TextStyle(color: Colors.grey.shade600)),
],
),
),

@@ -10,6 +10,7 @@ import 'services/target_detection_service.dart';
import 'services/score_calculator_service.dart';
import 'services/grouping_analyzer_service.dart';
import 'services/image_processing_service.dart';
import 'services/yolo_impact_detection_service.dart';

void main() async {
WidgetsFlutterBinding.ensureInitialized();
@@ -33,9 +34,13 @@ void main() async {
Provider<ImageProcessingService>(
create: (_) => ImageProcessingService(),
),
Provider<YOLOImpactDetectionService>(
create: (_) => YOLOImpactDetectionService(),
),
Provider<TargetDetectionService>(
create: (context) => TargetDetectionService(
imageProcessingService: context.read<ImageProcessingService>(),
yoloService: context.read<YOLOImpactDetectionService>(),
),
),
Provider<ScoreCalculatorService>(
@@ -44,9 +49,7 @@ void main() async {
Provider<GroupingAnalyzerService>(
create: (_) => GroupingAnalyzerService(),
),
Provider<SessionRepository>(
create: (_) => SessionRepository(),
),
Provider<SessionRepository>(create: (_) => SessionRepository()),
],
child: const BullyApp(),
),

@@ -8,6 +8,7 @@ library;
import 'dart:io';
import 'dart:math' as math;
import 'package:image/image.dart' as img;
import 'package:opencv_dart/opencv_dart.dart' as cv;
import 'package:path_provider/path_provider.dart';

/// Distortion parameters computed from the calibration
@@ -281,16 +282,56 @@ class DistortionCorrectionService {
final p11 = image.getPixel(x1, y1);

// Interpolate each channel
final r = _lerp2D(p00.r.toDouble(), p10.r.toDouble(), p01.r.toDouble(), p11.r.toDouble(), wx, wy);
final g = _lerp2D(p00.g.toDouble(), p10.g.toDouble(), p01.g.toDouble(), p11.g.toDouble(), wx, wy);
final b = _lerp2D(p00.b.toDouble(), p10.b.toDouble(), p01.b.toDouble(), p11.b.toDouble(), wx, wy);
final a = _lerp2D(p00.a.toDouble(), p10.a.toDouble(), p01.a.toDouble(), p11.a.toDouble(), wx, wy);
final r = _lerp2D(
p00.r.toDouble(),
p10.r.toDouble(),
p01.r.toDouble(),
p11.r.toDouble(),
wx,
wy,
);
final g = _lerp2D(
p00.g.toDouble(),
p10.g.toDouble(),
p01.g.toDouble(),
p11.g.toDouble(),
wx,
wy,
);
final b = _lerp2D(
p00.b.toDouble(),
p10.b.toDouble(),
p01.b.toDouble(),
p11.b.toDouble(),
wx,
wy,
);
final a = _lerp2D(
p00.a.toDouble(),
p10.a.toDouble(),
p01.a.toDouble(),
p11.a.toDouble(),
wx,
wy,
);

return img.ColorRgba8(r.round().clamp(0, 255), g.round().clamp(0, 255), b.round().clamp(0, 255), a.round().clamp(0, 255));
return img.ColorRgba8(
r.round().clamp(0, 255),
g.round().clamp(0, 255),
b.round().clamp(0, 255),
a.round().clamp(0, 255),
);
}

/// 2D linear interpolation
double _lerp2D(double v00, double v10, double v01, double v11, double wx, double wy) {
double _lerp2D(
double v00,
double v10,
double v01,
double v11,
double wx,
double wy,
) {
final top = v00 * (1 - wx) + v10 * wx;
final bottom = v01 * (1 - wx) + v11 * wx;
return top * (1 - wy) + bottom * wy;
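The `_lerp2D` helper above is plain bilinear interpolation: blend the two neighbouring pixels along the top edge, the two along the bottom edge, then blend those two results vertically. A minimal standalone sketch of the same arithmetic:

```python
def lerp2d(v00, v10, v01, v11, wx, wy):
    """Bilinear interpolation, mirroring the Dart `_lerp2D` helper.

    v00..v11 are the four neighbouring pixel values of one channel;
    wx, wy are the fractional offsets (0..1) of the sample point inside
    that 2x2 pixel cell.
    """
    top = v00 * (1 - wx) + v10 * wx       # interpolate along the top edge
    bottom = v01 * (1 - wx) + v11 * wx    # interpolate along the bottom edge
    return top * (1 - wy) + bottom * wy   # then blend the two edges vertically

# Sampling exactly on a corner returns that corner's value...
assert lerp2d(10, 20, 30, 40, 0.0, 0.0) == 10
# ...and the cell centre returns the average of the four neighbours.
assert lerp2d(10, 20, 30, 40, 0.5, 0.5) == 25.0
```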
@@ -320,7 +361,9 @@ class DistortionCorrectionService {
final height = image.height;

// Convert the normalized coordinates to pixels
final srcCorners = corners.map((c) => (x: c.x * width, y: c.y * height)).toList();
final srcCorners = corners
.map((c) => (x: c.x * width, y: c.y * height))
.toList();

// Compute the size of the destination rectangle
// We take the average of the widths and heights
@@ -336,20 +379,21 @@ class DistortionCorrectionService {
final result = img.Image(width: dstWidth, height: dstHeight);

// Compute the perspective transformation matrix
final matrix = _computePerspectiveMatrix(
srcCorners,
[
(x: 0.0, y: 0.0),
(x: dstWidth.toDouble(), y: 0.0),
(x: dstWidth.toDouble(), y: dstHeight.toDouble()),
(x: 0.0, y: dstHeight.toDouble()),
],
);
final matrix = _computePerspectiveMatrix(srcCorners, [
(x: 0.0, y: 0.0),
(x: dstWidth.toDouble(), y: 0.0),
(x: dstWidth.toDouble(), y: dstHeight.toDouble()),
(x: 0.0, y: dstHeight.toDouble()),
]);

// Apply the transformation
for (int y = 0; y < dstHeight; y++) {
for (int x = 0; x < dstWidth; x++) {
final src = _applyPerspectiveTransform(matrix, x.toDouble(), y.toDouble());
final src = _applyPerspectiveTransform(
matrix,
x.toDouble(),
y.toDouble(),
);

if (src.x >= 0 && src.x < width && src.y >= 0 && src.y < height) {
final pixel = _bilinearInterpolate(image, src.x, src.y);
@@ -402,16 +446,74 @@ class DistortionCorrectionService {
return h;
}

/// Solves the linear system to find the 3x3 homography matrix.
/// Uses Gauss-Jordan elimination with partial pivoting for stability.
List<double> _solveHomography(List<List<double>> a) {
// Simplified implementation - normalization and solving
// In practice, a real SVD decomposition should be used
// The system 'a' is 8x9 (8 equations, 9 unknowns).
// We fix h8 = 1.0 to solve the system, which gives us an 8x8 system.
final int n = 8;
final List<List<double>> matrix = List.generate(
n,
(i) => List<double>.from(a[i]),
);

// For now, return an identity matrix
// TODO: Implement a real solver
return [1, 0, 0, 0, 1, 0, 0, 0, 1];
// Vector B (the constants on the other side of the equality)
// In DLT, -h8 * dx (or dy) becomes the constant term.
final List<double> b = List.generate(n, (i) => -matrix[i][8]);

// Gauss-Jordan elimination
for (int i = 0; i < n; i++) {
// Pivot search (largest value in the column to limit rounding errors)
int pivot = i;
for (int j = i + 1; j < n; j++) {
if (matrix[j][i].abs() > matrix[pivot][i].abs()) {
pivot = j;
}
}

// Swap the rows (if needed)
final List<double> tempRow = matrix[i];
matrix[i] = matrix[pivot];
matrix[pivot] = tempRow;

final double tempB = b[i];
b[i] = b[pivot];
b[pivot] = tempB;

// Singularity check (avoid dividing by zero)
if (matrix[i][i].abs() < 1e-10) {
return [1, 0, 0, 0, 1, 0, 0, 0, 1]; // Return identity on failure
}

// Eliminate the column below the pivot row
for (int j = i + 1; j < n; j++) {
final double factor = matrix[j][i] / matrix[i][i];
b[j] -= factor * b[i];
for (int k = i; k < n; k++) {
matrix[j][k] -= factor * matrix[i][k];
}
}
}

// Back substitution
final List<double> h = List.filled(9, 0.0);
for (int i = n - 1; i >= 0; i--) {
double sum = 0.0;
for (int j = i + 1; j < n; j++) {
sum += matrix[i][j] * h[j];
}
h[i] = (b[i] - sum) / matrix[i][i];
}

h[8] = 1.0; // Final normalization
return h;
}

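The new `_solveHomography` replaces the identity stub with the classic DLT scheme: build two equations per point correspondence, fix h8 = 1, solve the 8x8 system by Gaussian elimination with partial pivoting, and back-substitute. A standalone sketch of the same scheme in pure Python (`solve_homography` and `apply_h` are illustrative names, not the app's API, and the singularity fallback is omitted for brevity):

```python
def solve_homography(src, dst):
    """3x3 homography (flat 9-list, h8 fixed to 1) mapping the 4 `src`
    points onto the 4 `dst` points, via DLT + Gaussian elimination with
    partial pivoting -- the same scheme as the Dart code."""
    # Build the 8x9 DLT system: two rows per point correspondence.
    a = []
    for (x, y), (u, v) in zip(src, dst):
        a.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        a.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    n = 8
    m = [row[:8] for row in a]
    b = [-row[8] for row in a]  # with h8 = 1, the last column moves to the RHS
    for i in range(n):
        # Partial pivoting: bring the largest entry of the column up.
        pivot = max(range(i, n), key=lambda j: abs(m[j][i]))
        m[i], m[pivot] = m[pivot], m[i]
        b[i], b[pivot] = b[pivot], b[i]
        # Eliminate the column below the pivot row.
        for j in range(i + 1, n):
            f = m[j][i] / m[i][i]
            b[j] -= f * b[i]
            for k in range(i, n):
                m[j][k] -= f * m[i][k]
    # Back substitution.
    h = [0.0] * 9
    for i in range(n - 1, -1, -1):
        s = sum(m[i][j] * h[j] for j in range(i + 1, n))
        h[i] = (b[i] - s) / m[i][i]
    h[8] = 1.0
    return h

def apply_h(h, x, y):
    """Project (x, y) through the homography, with perspective division."""
    w = h[6] * x + h[7] * y + h[8]
    return (h[0] * x + h[1] * y + h[2]) / w, (h[3] * x + h[4] * y + h[5]) / w
```

For an affine mapping of the unit square onto the square with corners (2,3)..(4,5), the centre (0.5, 0.5) lands on (3, 4); for a genuine perspective warp all four corners map exactly.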
({double x, double y}) _applyPerspectiveTransform(List<double> h, double x, double y) {
({double x, double y}) _applyPerspectiveTransform(
List<double> h,
double x,
double y,
) {
final w = h[6] * x + h[7] * y + h[8];
if (w.abs() < 1e-10) {
return (x: x, y: y);
@@ -420,4 +522,553 @@ class DistortionCorrectionService {
final ny = (h[3] * x + h[4] * y + h[5]) / w;
return (x: nx, y: ny);
}

/// Corrects the perspective based on detecting circles (ellipses)
/// in the image.
///
/// This method tries to detect the most prominent ellipse (the target)
/// and computes a transformation that makes it perfectly circular.
Future<String> correctPerspectiveUsingCircles(String imagePath) async {
try {
// 1. Load the image with OpenCV
final src = cv.imread(imagePath, flags: cv.IMREAD_COLOR);
if (src.isEmpty) throw Exception("Impossible de charger l'image");

// 2. Preprocessing
final gray = cv.cvtColor(src, cv.COLOR_BGR2GRAY);
final blurred = cv.gaussianBlur(gray, (5, 5), 0);

// Canny edge detector with an adaptive (Otsu) threshold
final thresh = cv.threshold(
blurred,
0,
255,
cv.THRESH_BINARY | cv.THRESH_OTSU,
);
final edges = cv.canny(blurred, thresh.$1 * 0.5, thresh.$1);

// 3. Find the contours
final contoursResult = cv.findContours(
edges,
cv.RETR_EXTERNAL,
cv.CHAIN_APPROX_SIMPLE,
);
final contours = contoursResult.$1;

if (contours.isEmpty) return imagePath; // No contours found

// 4. Find the best ellipse candidate
cv.RotatedRect? bestEllipse;
double maxArea = 0;

for (final contour in contours) {
if (contour.length < 5)
continue; // fitEllipse requires at least 5 points

final area = cv.contourArea(contour);
if (area < 1000) continue; // Ignore noise that is too small

final ellipse = cv.fitEllipse(contour);

// Selection criterion: we look for the largest ellipse that is close to a circle.
// But since we want to correct distortion, it MAY be flattened.
// So we just take the largest reasonably centered ellipse.
if (area > maxArea) {
maxArea = area;
bestEllipse = ellipse;
}
}

if (bestEllipse == null) return imagePath;

// 5. Compute the perspective transformation
// The idea is to map the 4 vertices of the detected ellipse onto a perfect circle.
// Or more simply, map the ellipse's bounding rectangle onto a square.

// Source points: the 4 corners of the ellipse's rotated rect
// Note: opencv_dart RotatedRect points() not available directly?
// We can use boxPoints(ellipse)
final boxPoints = cv.boxPoints(bestEllipse);
// boxPoints returns Mat (4x2 float32)

// Extract the 4 points
final List<cv.Point> srcPoints = [];

for (int i = 0; i < boxPoints.length; i++) {
// Access the point at index i directly
final point2f = boxPoints[i];

// Convert the float coordinates to int for cv.Point
srcPoints.add(cv.Point(point2f.x.toInt(), point2f.y.toInt()));
}

// Sort the points to get: TL, TR, BR, BL
_sortPoints(srcPoints);

// Target dimensions
final side = math
.max(bestEllipse.size.width, bestEllipse.size.height)
.toInt();

final List<cv.Point> dstPoints = [
cv.Point(0, 0),
cv.Point(side, 0),
cv.Point(side, side),
cv.Point(0, side),
];

// Perspective matrix
final M = cv.getPerspectiveTransform(
cv.VecPoint.fromList(srcPoints),
cv.VecPoint.fromList(dstPoints),
);

// 6. Warp the image
final corrected = cv.warpPerspective(src, M, (side, side));

// 7. Save
final tempDir = await getTemporaryDirectory();
final timestamp = DateTime.now().millisecondsSinceEpoch;
final outputPath = '${tempDir.path}/corrected_circle_$timestamp.jpg';

cv.imwrite(outputPath, corrected);

return outputPath;
} catch (e) {
// On error, return the original image
print('Erreur correction perspective cercles: $e');
return imagePath;
}
}

/// Sorts the points in the order: Top-Left, Top-Right, Bottom-Right, Bottom-Left
void _sortPoints(List<cv.Point> points) {
// Compute the centroid
double cx = 0;
double cy = 0;
for (final p in points) {
cx += p.x;
cy += p.y;
}
cx /= points.length;
cy /= points.length;

points.sort((a, b) {
// Sort by angle around the center
final angleA = math.atan2(a.y - cy, a.x - cx);
final angleB = math.atan2(b.y - cy, b.x - cx);
return angleA.compareTo(angleB);
});

// Re-sort to be safe:
points.sort((a, b) => (a.y + a.x).compareTo(b.y + b.x));
final tl = points[0];
final br = points[3];

// tr and bl remain
final remaining = [points[1], points[2]];
remaining.sort((a, b) => a.x.compareTo(b.x));
final bl = remaining[0];
final tr = remaining[1];

points[0] = tl;
points[1] = tr;
points[2] = br;
points[3] = bl;
}
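`_sortPoints` ultimately orders a quadrilateral's corners with a coordinate-sum heuristic: the corner with the smallest x+y is top-left, the largest is bottom-right, and the remaining two are split by their x coordinate. A standalone sketch of that heuristic (the angle sort in the Dart version is overwritten by this second sort):

```python
def sort_corners(points):
    """Order 4 corner points as [top-left, top-right, bottom-right,
    bottom-left], mirroring the Dart `_sortPoints` heuristic: smallest
    x+y -> TL, largest x+y -> BR, and of the remaining two the smaller
    x -> BL, the larger x -> TR."""
    by_sum = sorted(points, key=lambda p: p[0] + p[1])
    tl, br = by_sum[0], by_sum[3]
    bl, tr = sorted(by_sum[1:3], key=lambda p: p[0])
    return [tl, tr, br, bl]

corners = [(90, 10), (15, 95), (10, 12), (88, 90)]
print(sort_corners(corners))  # [(10, 12), (90, 10), (88, 90), (15, 95)]
```

Note the heuristic assumes a roughly axis-aligned quad; a quad rotated near 45 degrees can make two corners tie on x+y and be ordered arbitrarily.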

/// Corrects the perspective by reshaping the largest oval (ellipse) into a perfect circle,
/// without aggressively cropping the whole image.
Future<String> correctPerspectiveUsingOvals(String imagePath) async {
try {
final src = cv.imread(imagePath, flags: cv.IMREAD_COLOR);
if (src.isEmpty) throw Exception("Impossible de charger l'image");

final gray = cv.cvtColor(src, cv.COLOR_BGR2GRAY);
final blurred = cv.gaussianBlur(gray, (5, 5), 0);

final thresh = cv.threshold(
blurred,
0,
255,
cv.THRESH_BINARY | cv.THRESH_OTSU,
);
final edges = cv.canny(blurred, thresh.$1 * 0.5, thresh.$1);

final contoursResult = cv.findContours(
edges,
cv.RETR_EXTERNAL,
cv.CHAIN_APPROX_SIMPLE,
);
final contours = contoursResult.$1;

if (contours.isEmpty) return imagePath;

cv.RotatedRect? bestEllipse;
double maxArea = 0;

for (final contour in contours) {
if (contour.length < 5) continue;
final area = cv.contourArea(contour);
if (area < 1000) continue;

final ellipse = cv.fitEllipse(contour);
if (area > maxArea) {
maxArea = area;
bestEllipse = ellipse;
}
}

if (bestEllipse == null) return imagePath;

// The goal here is to morph the bestEllipse into a perfect circle, while
// keeping the image the same size and the center of the ellipse in the same place.
// We'll use the average of the width and height (or max) to define the target circle
final targetRadius =
math.max(bestEllipse.size.width, bestEllipse.size.height) / 2.0;

// Extract the 4 bounding box points of the ellipse
final boxPoints = cv.boxPoints(bestEllipse);
final List<cv.Point> srcPoints = [];
for (int i = 0; i < boxPoints.length; i++) {
srcPoints.add(cv.Point(boxPoints[i].x.toInt(), boxPoints[i].y.toInt()));
}
_sortPoints(srcPoints);

// Calculate the size of the perfectly squared output image
final int side = (targetRadius * 2).toInt();

final List<cv.Point> dstPoints = [
cv.Point(0, 0), // Top-Left
cv.Point(side, 0), // Top-Right
cv.Point(side, side), // Bottom-Right
cv.Point(0, side), // Bottom-Left
];

// Morph the target region into a perfect square, cropping the rest of the image
final M = cv.getPerspectiveTransform(
cv.VecPoint.fromList(srcPoints),
cv.VecPoint.fromList(dstPoints),
);

final corrected = cv.warpPerspective(src, M, (side, side));

final tempDir = await getTemporaryDirectory();
final timestamp = DateTime.now().millisecondsSinceEpoch;
final outputPath = '${tempDir.path}/corrected_oval_$timestamp.jpg';

cv.imwrite(outputPath, corrected);

return outputPath;
} catch (e) {
print('Erreur correction perspective ovales: $e');
return imagePath;
}
}

/// Corrects distortion and depth (perspective) by building a mesh based on the
/// concentricity of the target's circles to find the best-fitting plane.
Future<String> correctPerspectiveWithConcentricMesh(String imagePath) async {
try {
final src = cv.imread(imagePath, flags: cv.IMREAD_COLOR);
if (src.isEmpty) throw Exception("Impossible de charger l'image");

final gray = cv.cvtColor(src, cv.COLOR_BGR2GRAY);
final blurred = cv.gaussianBlur(gray, (5, 5), 0);
final thresh = cv.threshold(
blurred,
0,
255,
cv.THRESH_BINARY | cv.THRESH_OTSU,
);
final edges = cv.canny(blurred, thresh.$1 * 0.5, thresh.$1);

final contoursResult = cv.findContours(
edges,
cv.RETR_LIST,
cv.CHAIN_APPROX_SIMPLE,
);
final contours = contoursResult.$1;
if (contours.isEmpty) return imagePath;

List<cv.RotatedRect> ellipses = [];
for (final contour in contours) {
if (contour.length < 5) continue;
if (cv.contourArea(contour) < 500) continue;
ellipses.add(cv.fitEllipse(contour));
}

if (ellipses.isEmpty) return imagePath;

// Find the largest ellipse to serve as our central reference
ellipses.sort(
(a, b) => (b.size.width * b.size.height).compareTo(
a.size.width * a.size.height,
),
);
final largestEllipse = ellipses.first;
final maxDist =
math.max(largestEllipse.size.width, largestEllipse.size.height) *
0.15;

// Group all ellipses that are roughly concentric with the largest one
List<cv.RotatedRect> concentricGroup = [];
for (final e in ellipses) {
final dx = e.center.x - largestEllipse.center.x;
final dy = e.center.y - largestEllipse.center.y;
if (math.sqrt(dx * dx + dy * dy) < maxDist) {
concentricGroup.add(e);
}
}

if (concentricGroup.length < 2) {
print(
"Pas assez de cercles concentriques pour le maillage, utilisation de la méthode simple.",
);
return await correctPerspectiveUsingOvals(imagePath);
}

final targetRadius =
math.max(largestEllipse.size.width, largestEllipse.size.height) / 2.0;
final int side = (targetRadius * 2.4).toInt(); // Add padding
final double cx = side / 2.0;
final double cy = side / 2.0;

List<cv.Point2f> srcPointsList = [];
List<cv.Point2f> dstPointsList = [];

for (final ellipse in concentricGroup) {
final box = cv.boxPoints(ellipse);
final m0 = cv.Point2f(
(box[0].x + box[1].x) / 2,
(box[0].y + box[1].y) / 2,
);
final m1 = cv.Point2f(
(box[1].x + box[2].x) / 2,
(box[1].y + box[2].y) / 2,
);
final m2 = cv.Point2f(
(box[2].x + box[3].x) / 2,
(box[2].y + box[3].y) / 2,
);
final m3 = cv.Point2f(
(box[3].x + box[0].x) / 2,
(box[3].y + box[0].y) / 2,
);

final d02 = math.sqrt(
math.pow(m0.x - m2.x, 2) + math.pow(m0.y - m2.y, 2),
);
final d13 = math.sqrt(
math.pow(m1.x - m3.x, 2) + math.pow(m1.y - m3.y, 2),
);

cv.Point2f maj1, maj2, min1, min2;
double r;

if (d02 > d13) {
maj1 = m0;
maj2 = m2;
min1 = m1;
min2 = m3;
r = d02 / 2.0;
} else {
maj1 = m1;
maj2 = m3;
min1 = m0;
min2 = m2;
r = d13 / 2.0;
}

// Sort maj1 and maj2 so maj1 is left/top
if ((maj1.x - maj2.x).abs() > (maj1.y - maj2.y).abs()) {
if (maj1.x > maj2.x) {
final t = maj1;
maj1 = maj2;
maj2 = t;
}
} else {
if (maj1.y > maj2.y) {
final t = maj1;
maj1 = maj2;
maj2 = t;
}
}

// Sort min1 and min2 so min1 is top/left
if ((min1.y - min2.y).abs() > (min1.x - min2.x).abs()) {
if (min1.y > min2.y) {
final t = min1;
min1 = min2;
min2 = t;
}
} else {
if (min1.x > min2.x) {
final t = min1;
min1 = min2;
min2 = t;
}
}

srcPointsList.addAll([maj1, maj2, min1, min2]);
dstPointsList.addAll([
cv.Point2f(cx - r, cy),
cv.Point2f(cx + r, cy),
cv.Point2f(cx, cy - r),
cv.Point2f(cx, cy + r),
]);

// Add ellipse centers mapping perfectly to the origin to force concentric depth alignment
srcPointsList.add(cv.Point2f(ellipse.center.x, ellipse.center.y));
dstPointsList.add(cv.Point2f(cx, cy));
}

// We explicitly convert points to VecPoint to use findHomography standard binding
final srcVec = cv.VecPoint.fromList(
srcPointsList.map((p) => cv.Point(p.x.toInt(), p.y.toInt())).toList(),
);
final dstVec = cv.VecPoint.fromList(
dstPointsList.map((p) => cv.Point(p.x.toInt(), p.y.toInt())).toList(),
);

final M = cv.findHomography(
cv.Mat.fromVec(srcVec),
cv.Mat.fromVec(dstVec),
method: cv.RANSAC,
);

if (M.isEmpty) {
return await correctPerspectiveUsingOvals(imagePath);
}

final corrected = cv.warpPerspective(src, M, (side, side));

final tempDir = await getTemporaryDirectory();
final timestamp = DateTime.now().millisecondsSinceEpoch;
final outputPath = '${tempDir.path}/corrected_mesh_$timestamp.jpg';
cv.imwrite(outputPath, corrected);

return outputPath;
} catch (e) {
print('Erreur correction perspective maillage concentrique: $e');
return imagePath;
}
}
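The concentric-mesh loop above builds point correspondences per ellipse: the midpoints of its rotated bounding box's edges (the ellipse's axis endpoints) map to the compass points of a target circle of radius r = major_axis / 2, and the ellipse centre is pinned to the circle centre. A simplified sketch of that correspondence construction for an axis-aligned ellipse (the Dart loop also handles rotation; `mesh_point_pairs` is an illustrative name):

```python
def mesh_point_pairs(center, width, height, cx, cy):
    """Build (src, dst) correspondences for one axis-aligned ellipse, as
    fed to findHomography by the concentric-mesh correction: the ellipse's
    axis endpoints map to the four compass points of a circle of radius
    r = max(width, height) / 2 centred at (cx, cy), and the ellipse centre
    maps to the circle centre."""
    x0, y0 = center
    r = max(width, height) / 2.0
    src = [(x0 - width / 2, y0),   # left end of the horizontal axis
           (x0 + width / 2, y0),   # right end
           (x0, y0 - height / 2),  # top end of the vertical axis
           (x0, y0 + height / 2),  # bottom end
           (x0, y0)]               # centre, pinned to the circle centre
    dst = [(cx - r, cy), (cx + r, cy), (cx, cy - r), (cx, cy + r), (cx, cy)]
    return src, dst

src, dst = mesh_point_pairs(center=(120, 80), width=100, height=60, cx=60, cy=60)
```

Stacking these pairs for every concentric ellipse over-determines the homography, which is why the Dart code solves it with `findHomography` and RANSAC rather than an exact 4-point transform.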
|
  /// Corrects the perspective by detecting the sheet's four corners (quadrilateral)
  ///
  /// This method looks for the largest 4-sided polygon (the paper's edge)
  /// and warps it into a perfect square.
  Future<String> correctPerspectiveUsingQuadrilateral(String imagePath) async {
    try {
      final src = cv.imread(imagePath, flags: cv.IMREAD_COLOR);
      if (src.isEmpty) throw Exception("Impossible de charger l'image");

      final gray = cv.cvtColor(src, cv.COLOR_BGR2GRAY);
      // Stronger blur, to ignore internal details (circles, holes)
      final blurred = cv.gaussianBlur(gray, (9, 9), 0);

      // Canny edge detector
      final thresh = cv.threshold(
        blurred,
        0,
        255,
        cv.THRESH_BINARY | cv.THRESH_OTSU,
      );
      final edges = cv.canny(blurred, thresh.$1 * 0.5, thresh.$1);

      // For sheet detection (edges may be broken because of lighting)
      final kernel = cv.getStructuringElement(cv.MORPH_RECT, (5, 5));
      final closedEdges = cv.morphologyEx(edges, cv.MORPH_CLOSE, kernel);

      // Find contours
      final contoursResult = cv.findContours(
        closedEdges,
        cv.RETR_EXTERNAL,
        cv.CHAIN_APPROX_SIMPLE,
      );
      final contours = contoursResult.$1;

      cv.VecPoint? bestQuad;
      double maxArea = 0;

      final minArea = src.rows * src.cols * 0.1; // At least 10% of the image

      for (final contour in contours) {
        final area = cv.contourArea(contour);
        if (area < minArea) continue;

        final peri = cv.arcLength(contour, true);
        // Polygonal approximation (tolerance = 2% to 5% of the perimeter)
        final approx = cv.approxPolyDP(contour, 0.04 * peri, true);

        if (approx.length == 4) {
          if (area > maxArea) {
            maxArea = area;
            bestQuad = approx;
          }
        }
      }

      // Fallback
      if (bestQuad == null) {
        print(
          "Aucun papier quadrilatère détecté, on utilise les cercles à la place.",
        );
        return await correctPerspectiveUsingCircles(imagePath);
      }

      // Convert to List<cv.Point>
      final List<cv.Point> srcPoints = [];
      for (int i = 0; i < bestQuad.length; i++) {
        srcPoints.add(bestQuad[i]);
      }

      _sortPoints(srcPoints);

      // Calculate max width and height
      double widthA = _distanceCV(srcPoints[2], srcPoints[3]);
      double widthB = _distanceCV(srcPoints[1], srcPoints[0]);
      int dstWidth = math.max(widthA, widthB).toInt();

      double heightA = _distanceCV(srcPoints[1], srcPoints[2]);
      double heightB = _distanceCV(srcPoints[0], srcPoints[3]);
      int dstHeight = math.max(heightA, heightB).toInt();

      // Since standard target paper forms a square, we force the resulting warp to be a perfect square.
      int side = math.max(dstWidth, dstHeight);

      final List<cv.Point> dstPoints = [
        cv.Point(0, 0),
        cv.Point(side, 0),
        cv.Point(side, side),
        cv.Point(0, side),
      ];

      final M = cv.getPerspectiveTransform(
        cv.VecPoint.fromList(srcPoints),
        cv.VecPoint.fromList(dstPoints),
      );

      final corrected = cv.warpPerspective(src, M, (side, side));

      final tempDir = await getTemporaryDirectory();
      final timestamp = DateTime.now().millisecondsSinceEpoch;
      final outputPath = '${tempDir.path}/corrected_quad_$timestamp.jpg';

      cv.imwrite(outputPath, corrected);

      return outputPath;
    } catch (e) {
      print('Erreur correction perspective quadrilatère: $e');
      // Fallback
      return await correctPerspectiveUsingCircles(imagePath);
    }
  }

  double _distanceCV(cv.Point p1, cv.Point p2) {
    final dx = p2.x - p1.x;
    final dy = p2.y - p1.y;
    return math.sqrt(dx * dx + dy * dy);
  }
}
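The `_sortPoints` helper called before `getPerspectiveTransform` is not shown in this diff; the usual convention (assumed here) orders the four corners top-left, top-right, bottom-right, bottom-left using coordinate sums and differences, so the destination square's corners line up with the detected ones. A minimal Python sketch of that convention (illustrative only; the app code is Dart):

```python
def order_corners(pts):
    """Order four (x, y) points as TL, TR, BR, BL.

    TL has the smallest x+y and BR the largest; of the remaining
    two points, TR has the smaller y-x and BL the larger. Assumes
    a roughly upright quadrilateral (the detected sheet of paper).
    """
    by_sum = sorted(pts, key=lambda p: p[0] + p[1])
    tl, br = by_sum[0], by_sum[-1]
    rest = [p for p in pts if p is not tl and p is not br]
    tr, bl = sorted(rest, key=lambda p: p[1] - p[0])
    return [tl, tr, br, bl]

corners = order_corners([(98, 102), (5, 100), (3, 2), (100, 0)])
# corners[0] is the top-left point (3, 2)
```

Pairing `srcPoints` in this order with `[(0,0), (side,0), (side,side), (0,side)]` is what keeps the warp from flipping or rotating the sheet.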
@@ -196,10 +196,11 @@ class ImageProcessingService {

  /// Analyze reference impacts to learn their characteristics
  /// This actually finds the blob at each reference point and extracts its real properties
  /// IMPROVED: wider search and more robust analysis
  ImpactCharacteristics? analyzeReferenceImpacts(
    String imagePath,
    List<ReferenceImpact> references, {
    int searchRadius = 30,
    int searchRadius = 50, // Increased from 30 to 50
  }) {
    if (references.length < 2) return null;

@@ -209,10 +210,10 @@ class ImageProcessingService {
      final originalImage = img.decodeImage(bytes);
      if (originalImage == null) return null;

      // Resize for faster processing
      // Resize for faster processing - larger size
      img.Image image;
      double scale = 1.0;
      final maxDimension = 1000;
      final maxDimension = 1200; // Increased for more precision
      if (originalImage.width > maxDimension || originalImage.height > maxDimension) {
        scale = maxDimension / math.max(originalImage.width, originalImage.height);
        image = img.copyResize(
@@ -235,45 +236,67 @@ class ImageProcessingService {
      final fillRatios = <double>[];
      final thresholds = <double>[];

      for (final ref in references) {
      print('Analyzing ${references.length} reference impacts...');

      for (int refIndex = 0; refIndex < references.length; refIndex++) {
        final ref = references[refIndex];
        final centerX = (ref.x * width).round().clamp(0, width - 1);
        final centerY = (ref.y * height).round().clamp(0, height - 1);

        // Find the darkest point in the search area (the center of the impact)
        print('Reference $refIndex at ($centerX, $centerY)');

        // IMPROVEMENT: search for the darkest point over a wider area
        int darkestX = centerX;
        int darkestY = centerY;
        double darkestLum = 255;

        for (int dy = -searchRadius; dy <= searchRadius; dy++) {
          for (int dx = -searchRadius; dx <= searchRadius; dx++) {
            final px = centerX + dx;
            final py = centerY + dy;
            if (px < 0 || px >= width || py < 0 || py >= height) continue;
        // Spiral search for the darkest point
        for (int r = 0; r <= searchRadius; r++) {
          for (int dy = -r; dy <= r; dy++) {
            for (int dx = -r; dx <= r; dx++) {
              // Only the square's perimeter, to avoid visiting pixels twice
              if (r > 0 && math.max(dx.abs(), dy.abs()) < r) continue;

            final pixel = blurred.getPixel(px, py);
            final lum = img.getLuminance(pixel).toDouble();
            if (lum < darkestLum) {
              darkestLum = lum;
              darkestX = px;
              darkestY = py;
              final px = centerX + dx;
              final py = centerY + dy;
              if (px < 0 || px >= width || py < 0 || py >= height) continue;

              final pixel = blurred.getPixel(px, py);
              final lum = img.getLuminance(pixel).toDouble();
              if (lum < darkestLum) {
                darkestLum = lum;
                darkestX = px;
                darkestY = py;
              }
            }
          }

          // If we have already found a very dark point, we can stop
          if (darkestLum < 50 && r > 5) break;
        }

        print(' Darkest point at ($darkestX, $darkestY), lum=$darkestLum');

        // Now find the blob at the darkest point using adaptive threshold
        // Start from the darkest point and expand until we find the boundary
        final blobResult = _findBlobAtPoint(blurred, darkestX, darkestY, width, height);

        if (blobResult != null) {
        if (blobResult != null && blobResult.size >= 10) { // At least 10 pixels
          luminances.add(blobResult.avgLuminance);
          sizes.add(blobResult.size.toDouble());
          circularities.add(blobResult.circularity);
          fillRatios.add(blobResult.fillRatio);
          thresholds.add(blobResult.threshold);
          print(' Found blob: size=${blobResult.size}, circ=${blobResult.circularity.toStringAsFixed(2)}, '
              'fill=${blobResult.fillRatio.toStringAsFixed(2)}, threshold=${blobResult.threshold.toStringAsFixed(0)}');
        } else {
          print(' No valid blob found at this reference');
        }
      }

      if (luminances.isEmpty) return null;
      if (luminances.isEmpty) {
        print('ERROR: No valid blobs found from any reference!');
        return null;
      }

      // Calculate statistics
      final avgLum = luminances.reduce((a, b) => a + b) / luminances.length;
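The new "spiral" search above scans concentric square rings, visiting only each ring's perimeter so no pixel is examined twice, and exits early once a sufficiently dark pixel has been found past radius 5. A self-contained Python sketch of the same traversal (the Dart version operates on a blurred grayscale `img.Image`; here the image is a plain 2-D list of luminances):

```python
def find_darkest(image, cx, cy, search_radius, dark_enough=50):
    """Expanding-square search for the darkest pixel around (cx, cy).

    For each radius r, only cells with max(|dx|, |dy|) == r are
    visited (the ring's perimeter). Stops early once a pixel darker
    than `dark_enough` is known and r has passed 5, mirroring the diff.
    Returns (x, y, luminance) of the darkest pixel found.
    """
    h, w = len(image), len(image[0])
    best = (cx, cy, 255.0)
    for r in range(search_radius + 1):
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                if r > 0 and max(abs(dx), abs(dy)) < r:
                    continue  # interior already visited at a smaller r
                px, py = cx + dx, cy + dy
                if not (0 <= px < w and 0 <= py < h):
                    continue
                lum = image[py][px]
                if lum < best[2]:
                    best = (px, py, lum)
        if best[2] < dark_enough and r > 5:
            break
    return best

# 11x11 flat gray image with one dark pixel at (x=7, y=4)
img = [[200.0] * 11 for _ in range(11)]
img[4][7] = 30.0
darkest = find_darkest(img, 5, 5, search_radius=10)
```

The perimeter-only test is what makes the rewrite equivalent to the old full-square scan while enabling the early exit.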
@@ -290,17 +313,25 @@ class ImageProcessingService {
        sizeVariance += math.pow(sizes[i] - avgSize, 2);
      }
      final lumStdDev = math.sqrt(lumVariance / luminances.length);
      final sizeStdDev = math.sqrt(sizeVariance / sizes.length);
      // IMPROVEMENT: minimum standard deviation to avoid overly narrow ranges
      final sizeStdDev = math.max(
        math.sqrt(sizeVariance / sizes.length),
        avgSize * 0.3, // At least 30% variance
      );

      return ImpactCharacteristics(
      final result = ImpactCharacteristics(
        avgLuminance: avgLum,
        luminanceStdDev: lumStdDev,
        luminanceStdDev: math.max(lumStdDev, 10), // Minimum variance of 10
        avgSize: avgSize,
        sizeStdDev: sizeStdDev,
        avgCircularity: avgCirc,
        avgFillRatio: avgFill,
        avgDarkThreshold: avgThreshold,
      );

      print('Learned characteristics: $result');

      return result;
    } catch (e) {
      print('Error analyzing reference impacts: $e');
      return null;
@@ -308,25 +339,30 @@ class ImageProcessingService {
  }

  /// Find a blob at a specific point and extract its characteristics
  /// IMPROVED: uses several detection methods and returns the best result
  _BlobAnalysis? _findBlobAtPoint(img.Image image, int startX, int startY, int width, int height) {
    // Get the luminance at the center point
    final centerPixel = image.getPixel(startX, startY);
    final centerLum = img.getLuminance(centerPixel).toDouble();

    // Find the threshold by looking at the luminance gradient around the point
    // Sample in expanding circles to find where the blob ends
    // METHOD 1: radial expansion to find the edge
    double sumLum = centerLum;
    int pixelCount = 1;
    double maxRadius = 0;

    // Sample at different radii to find the edge
    for (int r = 1; r <= 50; r++) {
    // Collect luminances at different radii for a more robust analysis
    final radialLuminances = <double>[];

    // Sample at different radii to find the edge - REASONABLE LIMIT for bullet impacts
    final maxSearchRadius = 60; // A bullet impact is no more than 60 pixels in radius
    for (int r = 1; r <= maxSearchRadius; r++) {
      double ringSum = 0;
      int ringCount = 0;

      // Sample points on a ring
      for (int i = 0; i < 16; i++) {
        final angle = (i / 16) * 2 * math.pi;
      final numSamples = math.max(12, r ~/ 2);
      for (int i = 0; i < numSamples; i++) {
        final angle = (i / numSamples) * 2 * math.pi;
        final px = startX + (r * math.cos(angle)).round();
        final py = startY + (r * math.sin(angle)).round();
        if (px < 0 || px >= width || py < 0 || py >= height) continue;
@@ -339,20 +375,47 @@ class ImageProcessingService {

      if (ringCount > 0) {
        final avgRingLum = ringSum / ringCount;
        // If the ring is significantly brighter than the center, we've found the edge
        if (avgRingLum > centerLum + 40) {
        radialLuminances.add(avgRingLum);

        // Edge detection: a significant luminance gradient
        // Adaptive threshold based on the difference from the center
        final luminanceDiff = avgRingLum - centerLum;

        // The edge is found when luminance increases significantly
        if (luminanceDiff > 30 && maxRadius == 0) {
          maxRadius = r.toDouble();
          break;
          break; // Stop as soon as the edge is found
        }

        if (maxRadius == 0) {
          sumLum += ringSum;
          pixelCount += ringCount;
        }
        sumLum += ringSum;
        pixelCount += ringCount;
      }
    }

    if (maxRadius < 3) return null; // Too small to be a valid blob
    // If no edge was found, look for the maximum gradient
    if (maxRadius < 2 && radialLuminances.length > 3) {
      double maxGradient = 0;
      int maxGradientIndex = 0;
      for (int i = 1; i < radialLuminances.length; i++) {
        final gradient = radialLuminances[i] - radialLuminances[i - 1];
        if (gradient > maxGradient) {
          maxGradient = gradient;
          maxGradientIndex = i;
        }
      }
      if (maxGradient > 10) {
        maxRadius = (maxGradientIndex + 1).toDouble();
      }
    }

    // Calculate threshold as the midpoint between center and edge luminance
    final edgeRadius = (maxRadius * 1.2).round();
    // Minimum radius of 3 pixels, maximum of 50 for a bullet impact
    if (maxRadius < 3) maxRadius = 3;
    if (maxRadius > 50) maxRadius = 50;

    // Calculate threshold as weighted average between center and edge luminance
    final edgeRadius = math.min((maxRadius * 1.2).round(), maxSearchRadius - 1);
    double edgeLum = 0;
    int edgeCount = 0;
    for (int i = 0; i < 16; i++) {
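The two-stage edge search in this hunk first looks for the first ring whose mean luminance exceeds the center by a fixed jump, then, failing that, falls back to the steepest ring-to-ring brightening. A compact Python sketch of both stages, assuming the per-ring mean luminances have already been computed (the Dart code builds them while sampling):

```python
def edge_radius(ring_lums, center_lum, jump=30, min_gradient=10):
    """Return the ring index (1-based) where the blob's edge sits, or 0.

    First pass: the first ring at least `jump` brighter than the center.
    Fallback: the ring following the steepest brightening between two
    consecutive rings, accepted only if that step exceeds `min_gradient`.
    """
    for r, lum in enumerate(ring_lums, start=1):
        if lum - center_lum > jump:
            return r
    best_r, best_g = 0, 0.0
    for i in range(1, len(ring_lums)):
        g = ring_lums[i] - ring_lums[i - 1]
        if g > best_g:
            best_g, best_r = g, i + 1
    return best_r if best_g > min_gradient else 0

# Sharp edge between ring 3 and 4: the first pass finds it.
sharp = edge_radius([40, 42, 45, 120, 180], center_lum=38)
# Gradual brightening that never jumps past +30: the fallback fires.
gradual = edge_radius([40, 45, 52, 64, 67], center_lum=38)
```

The fallback is what rescues low-contrast impacts that the original fixed `+40` test rejected outright.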
@@ -366,62 +429,94 @@ class ImageProcessingService {
    }
    if (edgeCount > 0) {
      edgeLum /= edgeCount;
    } else {
      edgeLum = centerLum + 50;
    }

    final threshold = ((centerLum + edgeLum) / 2).round();
    // Compute the optimal threshold
    final threshold = ((centerLum + edgeLum) / 2).round().clamp(20, 200);

    // Now do a flood fill with this threshold to get the actual blob
    final mask = List.generate(height, (_) => List.filled(width, false));
    for (int y = 0; y < height; y++) {
      for (int x = 0; x < width; x++) {
        final pixel = image.getPixel(x, y);
    // Use a limited local search area around the point
    final analysis = _tryFindBlobWithThresholdLocal(
      image, startX, startY, width, height, threshold, sumLum / pixelCount,
      maxRadius.round() + 10, // Search area slightly larger than the detected radius
    );

    return analysis;
  }

  /// Finds a blob with a given threshold within a limited local area
  _BlobAnalysis? _tryFindBlobWithThresholdLocal(
    img.Image image,
    int startX,
    int startY,
    int width,
    int height,
    int threshold,
    double avgLuminance,
    int maxSearchRadius,
  ) {
    // Limit the search area
    final minX = math.max(0, startX - maxSearchRadius);
    final maxX = math.min(width - 1, startX + maxSearchRadius);
    final minY = math.max(0, startY - maxSearchRadius);
    final maxY = math.min(height - 1, startY + maxSearchRadius);

    final localWidth = maxX - minX + 1;
    final localHeight = maxY - minY + 1;

    // Create binary mask ONLY for the local region
    final mask = List.generate(localHeight, (_) => List.filled(localWidth, false));
    for (int y = 0; y < localHeight; y++) {
      for (int x = 0; x < localWidth; x++) {
        final globalX = minX + x;
        final globalY = minY + y;
        final pixel = image.getPixel(globalX, globalY);
        final lum = img.getLuminance(pixel);
        mask[y][x] = lum < threshold;
      }
    }

    final visited = List.generate(height, (_) => List.filled(width, false));
    final visited = List.generate(localHeight, (_) => List.filled(localWidth, false));

    // Find the blob containing the start point
    if (!mask[startY][startX]) {
    // Find the blob containing the start point (in local coordinates)
    final localStartX = startX - minX;
    final localStartY = startY - minY;

    int searchX = localStartX;
    int searchY = localStartY;

    if (!mask[localStartY][localStartX]) {
      // Start point might not be in mask, find nearest point that is
      for (int r = 1; r <= 10; r++) {
        bool found = false;
      bool found = false;
      for (int r = 1; r <= 15 && !found; r++) {
        for (int dy = -r; dy <= r && !found; dy++) {
          for (int dx = -r; dx <= r && !found; dx++) {
            final px = startX + dx;
            final py = startY + dy;
            if (px >= 0 && px < width && py >= 0 && py < height && mask[py][px]) {
              final blob = _floodFill(mask, visited, px, py, width, height);

              // Calculate fill ratio: actual pixels / bounding circle area
              final boundingRadius = math.max(blob.radius, 1);
              final boundingCircleArea = math.pi * boundingRadius * boundingRadius;
              final fillRatio = (blob.size / boundingCircleArea).clamp(0.0, 1.0);

              return _BlobAnalysis(
                avgLuminance: sumLum / pixelCount,
                size: blob.size,
                circularity: blob.circularity,
                fillRatio: fillRatio,
                threshold: threshold.toDouble(),
              );
            final px = localStartX + dx;
            final py = localStartY + dy;
            if (px >= 0 && px < localWidth && py >= 0 && py < localHeight && mask[py][px]) {
              searchX = px;
              searchY = py;
              found = true;
            }
          }
        }
      }
      return null;
      if (!found) return null;
    }

    final blob = _floodFill(mask, visited, startX, startY, width, height);
    final blob = _floodFillLocal(mask, visited, searchX, searchY, localWidth, localHeight);

    // Calculate fill ratio
    // Check that the blob is valid - a reasonable size for an impact
    if (blob.size < 10 || blob.size > 5000) return null; // Between 10 and 5000 pixels

    // Calculate fill ratio: actual pixels / bounding circle area
    final boundingRadius = math.max(blob.radius, 1);
    final boundingCircleArea = math.pi * boundingRadius * boundingRadius;
    final fillRatio = (blob.size / boundingCircleArea).clamp(0.0, 1.0);

    return _BlobAnalysis(
      avgLuminance: sumLum / pixelCount,
      avgLuminance: avgLuminance,
      size: blob.size,
      circularity: blob.circularity,
      fillRatio: fillRatio,
@@ -429,12 +524,110 @@ class ImageProcessingService {
    );
  }

  /// Flood fill over a local area
  _Blob _floodFillLocal(
    List<List<bool>> mask,
    List<List<bool>> visited,
    int startX,
    int startY,
    int width,
    int height,
  ) {
    final stack = <_Point>[_Point(startX, startY)];
    final points = <_Point>[];

    int minX = startX, maxX = startX;
    int minY = startY, maxY = startY;
    int perimeterCount = 0;

    while (stack.isNotEmpty) {
      final point = stack.removeLast();
      final x = point.x;
      final y = point.y;

      if (x < 0 || x >= width || y < 0 || y >= height) continue;
      if (visited[y][x] || !mask[y][x]) continue;

      visited[y][x] = true;
      points.add(point);

      minX = math.min(minX, x);
      maxX = math.max(maxX, x);
      minY = math.min(minY, y);
      maxY = math.max(maxY, y);

      // Check if this is a perimeter pixel
      bool isPerimeter = false;
      for (final delta in [[-1, 0], [1, 0], [0, -1], [0, 1]]) {
        final nx = x + delta[0];
        final ny = y + delta[1];
        if (nx < 0 || nx >= width || ny < 0 || ny >= height || !mask[ny][nx]) {
          isPerimeter = true;
          break;
        }
      }
      if (isPerimeter) perimeterCount++;

      // Add neighbors (4-connectivity)
      stack.add(_Point(x + 1, y));
      stack.add(_Point(x - 1, y));
      stack.add(_Point(x, y + 1));
      stack.add(_Point(x, y - 1));
    }

    // Calculate centroid
    double sumX = 0, sumY = 0;
    for (final p in points) {
      sumX += p.x;
      sumY += p.y;
    }

    final centerX = points.isNotEmpty ? sumX / points.length : startX.toDouble();
    final centerY = points.isNotEmpty ? sumY / points.length : startY.toDouble();

    // Calculate bounding box dimensions
    final blobWidth = (maxX - minX + 1).toDouble();
    final blobHeight = (maxY - minY + 1).toDouble();

    // Calculate approximate radius based on bounding box
    final radius = math.max(blobWidth, blobHeight) / 2.0;

    // Calculate circularity
    final area = points.length.toDouble();
    final perimeter = perimeterCount.toDouble();
    final circularity = perimeter > 0
        ? (4 * math.pi * area) / (perimeter * perimeter)
        : 0.0;

    // Calculate aspect ratio
    final aspectRatio = blobWidth > blobHeight
        ? blobWidth / blobHeight
        : blobHeight / blobWidth;

    // Calculate fill ratio
    final boundingCircleArea = math.pi * radius * radius;
    final fillRatio = boundingCircleArea > 0 ? (area / boundingCircleArea).clamp(0.0, 1.0) : 0.0;

    return _Blob(
      x: centerX,
      y: centerY,
      radius: radius,
      size: points.length,
      circularity: circularity.clamp(0.0, 1.0),
      aspectRatio: aspectRatio,
      fillRatio: fillRatio,
    );
  }

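`_floodFillLocal` above counts perimeter pixels during the fill itself, so circularity 4πA/P² (1.0 for an ideal disc) needs no second pass over the blob. The same single-pass idea in a short Python sketch over a boolean mask:

```python
import math

def flood_fill_stats(mask, sx, sy):
    """4-connected flood fill over True cells, starting at (sx, sy).

    A perimeter cell has at least one 4-neighbour that is outside the
    mask (or outside the image). Returns (area, perimeter, circularity)
    with circularity = 4*pi*area / perimeter**2, clamped to 1.0.
    """
    h, w = len(mask), len(mask[0])
    stack, seen = [(sx, sy)], set()
    perimeter = 0
    while stack:
        x, y = stack.pop()
        if not (0 <= x < w and 0 <= y < h) or (x, y) in seen or not mask[y][x]:
            continue
        seen.add((x, y))
        on_edge = False
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nx < w and 0 <= ny < h) or not mask[ny][nx]:
                on_edge = True
            stack.append((nx, ny))  # harmless if out of mask: filtered above
        perimeter += on_edge
    area = len(seen)
    circ = 4 * math.pi * area / perimeter**2 if perimeter else 0.0
    return area, perimeter, min(circ, 1.0)

# A filled 3x3 square inside a 5x5 frame: area 9, 8 perimeter cells.
mask = [[False] * 5,
        [False, True, True, True, False],
        [False, True, True, True, False],
        [False, True, True, True, False],
        [False] * 5]
area, perim, circ = flood_fill_stats(mask, 2, 2)
```

Counting boundary-adjacent pixels as the perimeter slightly overestimates P for tiny blobs, which is why the learned circularity is later compared with a tolerance rather than exactly.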
  /// Detect impacts based on reference characteristics with tolerance
  ///
  /// Uses an adaptive multi-threshold approach for better detection
  List<DetectedImpact> detectImpactsFromReferences(
    String imagePath,
    ImpactCharacteristics characteristics, {
    double tolerance = 2.0, // Number of standard deviations
    double minCircularity = 0.4,
    double minCircularity = 0.3,
  }) {
    try {
      final file = File(imagePath);
@@ -445,7 +638,7 @@ class ImageProcessingService {
      // Resize for faster processing
      img.Image image;
      double scale = 1.0;
      final maxDimension = 1000;
      final maxDimension = 1200; // Increased for more precision
      if (originalImage.width > maxDimension || originalImage.height > maxDimension) {
        scale = maxDimension / math.max(originalImage.width, originalImage.height);
        image = img.copyResize(
@@ -460,36 +653,83 @@ class ImageProcessingService {
      final grayscale = img.grayscale(image);
      final blurred = img.gaussianBlur(grayscale, radius: 2);

      // Use the threshold learned from references
      final threshold = characteristics.avgDarkThreshold.round();
      // IMPROVEMENT: use several thresholds around the learned threshold
      final baseThreshold = characteristics.avgDarkThreshold.round();

      // Generate a more targeted range of thresholds
      final thresholds = <int>[];
      final thresholdRange = (15 * tolerance).round(); // Moderate range
      for (int offset = -thresholdRange; offset <= thresholdRange; offset += 8) {
        final t = (baseThreshold + offset).clamp(30, 150);
        if (!thresholds.contains(t)) thresholds.add(t);
      }

      // Calculate size range based on learned characteristics
      final minSize = (characteristics.avgSize / (tolerance * 2)).clamp(5, 10000).round();
      final maxSize = (characteristics.avgSize * tolerance * 2).clamp(10, 10000).round();
      // Use the variance, but with reasonable limits
      final sizeVariance = math.max(characteristics.sizeStdDev * tolerance, characteristics.avgSize * 0.4);
      final minSize = math.max(20, (characteristics.avgSize - sizeVariance).round()); // Minimum 20 pixels
      final maxSize = math.min(3000, (characteristics.avgSize + sizeVariance * 2).round()); // Maximum 3000 pixels

      // Calculate minimum fill ratio based on learned characteristics
      // Allow some variance but still filter out hollow shapes
      final minFillRatio = (characteristics.avgFillRatio - 0.2).clamp(0.3, 0.9);
      // Calculate minimum circularity - balanced
      final circularityTolerance = 0.2 * tolerance;
      final effectiveMinCircularity = math.max(
        characteristics.avgCircularity - circularityTolerance,
        minCircularity,
      ).clamp(0.35, 0.85);

      // Detect blobs using the learned threshold
      final impacts = _detectDarkSpots(
        blurred,
        threshold,
        minSize,
        maxSize,
        minCircularity: math.max(characteristics.avgCircularity - 0.2, minCircularity),
        minFillRatio: minFillRatio,
      );
      // Calculate minimum fill ratio - solid impacts
      final minFillRatio = (characteristics.avgFillRatio - 0.2).clamp(0.35, 0.85);

      print('Detection params: thresholds=$thresholds, size=$minSize-$maxSize, '
          'circ>=$effectiveMinCircularity, fill>=$minFillRatio');

      // Detect with several thresholds and combine the results
      final allBlobs = <_Blob>[];

      for (final threshold in thresholds) {
        final blobs = _detectDarkSpots(
          blurred,
          threshold,
          minSize,
          maxSize,
          minCircularity: effectiveMinCircularity,
          maxAspectRatio: 2.5, // More permissive
          minFillRatio: minFillRatio,
        );
        allBlobs.addAll(blobs);
      }

      // Merge overlapping blobs (the same impact detected at different thresholds)
      final mergedBlobs = _mergeOverlappingBlobs(allBlobs);

      // POST-DETECTION FILTER: keep only blobs similar to the references
      // The filter is more or less strict depending on the tolerance
      final sizeToleranceFactor = 0.3 + (tolerance - 1) * 0.3; // 0.3 to 1.5 depending on tolerance
      final minSizeRatio = math.max(0.15, 1 / (1 + sizeToleranceFactor * 3));
      final maxSizeRatio = 1 + sizeToleranceFactor * 4;

      final filteredBlobs = mergedBlobs.where((blob) {
        // Check the size against the learned characteristics
        final sizeRatio = blob.size / characteristics.avgSize;
        if (sizeRatio < minSizeRatio || sizeRatio > maxSizeRatio) return false;

        // Check circularity (slightly relaxed)
        if (blob.circularity < effectiveMinCircularity * 0.85) return false;

        // Check the fill ratio
        if (blob.fillRatio < minFillRatio * 0.9) return false;

        return true;
      }).toList();

      print('Found ${filteredBlobs.length} impacts after filtering (from ${mergedBlobs.length} merged)');

      // Convert to relative coordinates
      final width = originalImage.width.toDouble();
      final height = originalImage.height.toDouble();

      return impacts.map((impact) {
      return filteredBlobs.map((blob) {
        return DetectedImpact(
          x: impact.x / image.width,
          y: impact.y / image.height,
          radius: impact.radius / scale,
          x: blob.x / image.width,
          y: blob.y / image.height,
          radius: blob.radius / scale,
        );
      }).toList();
    } catch (e) {
@@ -498,6 +738,44 @@ class ImageProcessingService {
    }
  }

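The threshold sweep in this hunk generates candidate thresholds in steps of 8 over ±15×tolerance around the learned value, clamps each to [30, 150], and deduplicates. Just that generation step, as a Python sketch (pooling and merging of per-threshold detections are assumed to happen elsewhere, as in the Dart code):

```python
def threshold_sweep(base, tolerance, step=8, lo=30, hi=150):
    """Candidate thresholds centred on `base`.

    Spans offsets -round(15*tolerance) .. +round(15*tolerance) in
    increments of `step`, clamps each threshold to [lo, hi], and
    drops duplicates produced by the clamping, preserving scan order.
    """
    spread = round(15 * tolerance)
    out = []
    for offset in range(-spread, spread + 1, step):
        t = min(max(base + offset, lo), hi)
        if t not in out:
            out.append(t)
    return out

sweep = threshold_sweep(base=90, tolerance=2.0)
# eight thresholds, 60 through 116 in steps of 8
```

Because each threshold is clamped independently, a base near a bound collapses several offsets onto the same value; the duplicate check keeps the list short in that case.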
  /// Merges overlapping blobs, keeping the best representative
  List<_Blob> _mergeOverlappingBlobs(List<_Blob> blobs) {
    if (blobs.isEmpty) return [];

    // Sort by quality score (circularity * fillRatio * size)
    final sortedBlobs = List<_Blob>.from(blobs);
    sortedBlobs.sort((a, b) {
      final scoreA = a.circularity * a.fillRatio * a.size;
      final scoreB = b.circularity * b.fillRatio * b.size;
      return scoreB.compareTo(scoreA);
    });

    final merged = <_Blob>[];

    for (final blob in sortedBlobs) {
      bool shouldAdd = true;

      for (final existing in merged) {
        final dx = blob.x - existing.x;
        final dy = blob.y - existing.y;
        final distance = math.sqrt(dx * dx + dy * dy);
        final minDist = math.min(blob.radius, existing.radius);

        // If the centers are close, it is the same impact
        if (distance < minDist * 1.5) {
          shouldAdd = false;
          break;
        }
      }

      if (shouldAdd) {
        merged.add(blob);
      }
    }

    return merged;
  }

  /// Detect dark spots with adaptive luminance range
  List<_Blob> _detectDarkSpotsAdaptive(
    img.Image image,

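The merge above is a greedy non-maximum suppression: candidates are sorted by a quality score, and a blob is dropped when its centre lies within 1.5× the smaller of the two radii of an already-kept blob. The same logic in Python, with blobs reduced to (x, y, radius, score) tuples:

```python
import math

def merge_blobs(blobs):
    """Greedy NMS over (x, y, radius, score) tuples.

    Higher-score blobs are kept first; a candidate whose centre lies
    within 1.5x the smaller of the two radii of any kept blob is
    treated as a duplicate detection of the same impact and dropped.
    """
    kept = []
    for b in sorted(blobs, key=lambda b: -b[3]):
        is_dup = any(
            math.hypot(b[0] - k[0], b[1] - k[1]) < 1.5 * min(b[2], k[2])
            for k in kept
        )
        if not is_dup:
            kept.append(b)
    return kept

blobs = [
    (100, 100, 10, 0.9),  # strongest detection of impact A
    (103, 101, 12, 0.5),  # same impact, found at a weaker threshold
    (200, 150, 8, 0.7),   # impact B
]
merged = merge_blobs(blobs)
```

Using the smaller radius in the distance test keeps a small blob nested inside a large one from surviving as a separate impact.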
228
lib/services/opencv_impact_detection_service.dart
Normal file
@@ -0,0 +1,228 @@
/// Impact detection service using OpenCV.
library;

import 'dart:math' as math;
import 'package:opencv_dart/opencv_dart.dart' as cv;

/// OpenCV impact detection parameters
class OpenCVDetectionSettings {
  /// Low Canny threshold for edge detection
  final double cannyThreshold1;

  /// High Canny threshold for edge detection
  final double cannyThreshold2;

  /// Minimum distance between the centers of detected circles
  final double minDist;

  /// HoughCircles param1 (internal Canny threshold)
  final double param1;

  /// HoughCircles param2 (accumulator threshold)
  final double param2;

  /// Minimum circle radius in pixels
  final int minRadius;

  /// Maximum circle radius in pixels
  final int maxRadius;

  /// Gaussian blur size (must be odd)
  final int blurSize;

  /// Use contour detection in addition to Hough
  final bool useContourDetection;

  /// Minimum circularity for contour-based detection (0-1)
  final double minCircularity;

  /// Minimum contour area
  final double minContourArea;

  /// Maximum contour area
  final double maxContourArea;

  const OpenCVDetectionSettings({
    this.cannyThreshold1 = 50,
    this.cannyThreshold2 = 150,
    this.minDist = 20,
    this.param1 = 100,
    this.param2 = 30,
    this.minRadius = 5,
    this.maxRadius = 50,
    this.blurSize = 5,
    this.useContourDetection = true,
    this.minCircularity = 0.6,
    this.minContourArea = 50,
    this.maxContourArea = 5000,
  });
}

/// Impact detection result
class OpenCVDetectedImpact {
  /// Normalized X position (0-1)
  final double x;

  /// Normalized Y position (0-1)
  final double y;

  /// Radius in pixels
  final double radius;

  /// Confidence score (0-1)
  final double confidence;

  /// Detection method used
  final String method;

  const OpenCVDetectedImpact({
    required this.x,
    required this.y,
    required this.radius,
    this.confidence = 1.0,
    this.method = 'unknown',
  });
}

/// Impact detection service using OpenCV
class OpenCVImpactDetectionService {
  /// Detects impacts in an image using OpenCV
  List<OpenCVDetectedImpact> detectImpacts(
    String imagePath, {
    OpenCVDetectionSettings settings = const OpenCVDetectionSettings(),
  }) {
    try {
      final img = cv.imread(imagePath, flags: cv.IMREAD_COLOR);
      if (img.isEmpty) return [];

      final gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY);

      // Apply blur to reduce noise
      final blurKSize = (settings.blurSize, settings.blurSize);
      final blurred = cv.gaussianBlur(gray, blurKSize, 2, sigmaY: 2);

      final List<OpenCVDetectedImpact> detectedImpacts = [];

      // 1. Hough circle detection
      final circles = cv.HoughCircles(
        blurred,
        cv.HOUGH_GRADIENT,
        1,
        settings.minDist,
        param1: settings.param1,
        param2: settings.param2,
        minRadius: settings.minRadius,
        maxRadius: settings.maxRadius,
      );

      if (circles.rows > 0 && circles.cols > 0) {
        // Mat shape: (1, N, 3) usually for HoughCircles (CV_32FC3)
        // We use at<Vec3f> directly.

        for (int i = 0; i < circles.cols; i++) {
          final vec = circles.at<cv.Vec3f>(0, i);
          final x = vec.val1;
          final y = vec.val2;
          final r = vec.val3;

          detectedImpacts.add(
            OpenCVDetectedImpact(
              x: x / img.cols,
              y: y / img.rows,
              radius: r,
              confidence: 0.8,
              method: 'hough',
            ),
          );
        }
      }

      // 2. Contour detection (if enabled)
      if (settings.useContourDetection) {
        // Canny edge detection
        final edges = cv.canny(
          blurred,
          settings.cannyThreshold1,
          settings.cannyThreshold2,
        );

        // Find contours
        final contoursResult = cv.findContours(
          edges,
          cv.RETR_EXTERNAL,
          cv.CHAIN_APPROX_SIMPLE,
        );

        final contours = contoursResult.$1;
        // hierarchy is $2

        for (int i = 0; i < contours.length; i++) {
          final contour = contours[i];

          // Filter by area
          final area = cv.contourArea(contour);
          if (area < settings.minContourArea ||
              area > settings.maxContourArea) {
            continue;
          }

          // Filter by circularity
          final perimeter = cv.arcLength(contour, true);
          if (perimeter == 0) continue;
          final circularity = 4 * math.pi * area / (perimeter * perimeter);

          if (circularity < settings.minCircularity) continue;

          // Get bounding circle
          final enclosingCircle = cv.minEnclosingCircle(contour);
          final center = enclosingCircle.$1;
          final radius = enclosingCircle.$2;

          // Avoid duplicates (simple distance check against Hough results)
          bool isDuplicate = false;
          for (final existing in detectedImpacts) {
            final dx = existing.x * img.cols - center.x;
            final dy = existing.y * img.rows - center.y;
            final dist = math.sqrt(dx * dx + dy * dy);
            if (dist < radius) {
              isDuplicate = true;
              break;
            }
          }

          if (!isDuplicate) {
            detectedImpacts.add(
              OpenCVDetectedImpact(
                x: center.x / img.cols,
                y: center.y / img.rows,
                radius: radius,
                confidence: circularity, // Use circularity as confidence
                method: 'contour',
              ),
            );
          }
        }
      }

      return detectedImpacts;
    } catch (e) {
      // print('OpenCV Error: $e');
      return [];
    }
  }

||||
/// Détecte les impacts en utilisant une image de référence
|
||||
List<OpenCVDetectedImpact> detectFromReferences(
|
||||
String imagePath,
|
||||
List<({double x, double y})> referencePoints, {
|
||||
double tolerance = 2.0,
|
||||
}) {
|
||||
// Basic implementation: use average color/brightness of reference points
|
||||
// This is a placeholder for a more complex template matching or feature matching
|
||||
|
||||
// For now, we can just run the standard detection but filter results
|
||||
// based on properties of the reference points (e.g. size/radius if we had it).
|
||||
|
||||
// Returning standard detection for now to enable the feature.
|
||||
return detectImpacts(imagePath);
|
||||
}
|
||||
}
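The contour filter above keeps only shapes whose circularity `4 * pi * area / perimeter^2` is close to 1. A standalone sketch (synthetic values, not taken from the service) of how this metric behaves:

```dart
import 'dart:math' as math;

/// Circularity of a closed contour: 4*pi*area / perimeter^2.
/// Exactly 1 for a perfect circle; lower for elongated shapes.
double circularity(double area, double perimeter) =>
    4 * math.pi * area / (perimeter * perimeter);

void main() {
  // Circle of radius 10: area pi*r^2, perimeter 2*pi*r -> circularity 1.
  print(circularity(math.pi * 100, 2 * math.pi * 10));
  // Square of side 10: area 100, perimeter 40 -> pi/4, about 0.785.
  print(circularity(100, 40));
}
```

This is why a `minCircularity` around 0.7-0.8 rejects most non-circular contours while keeping slightly ragged bullet holes.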
240
lib/services/opencv_target_service.dart
Normal file
@@ -0,0 +1,240 @@
import 'dart:math' as math;
import 'package:opencv_dart/opencv_dart.dart' as cv;

class TargetDetectionResult {
  final double centerX;
  final double centerY;
  final double radius;
  final bool success;

  TargetDetectionResult({
    required this.centerX,
    required this.centerY,
    required this.radius,
    this.success = true,
  });

  factory TargetDetectionResult.failure() {
    return TargetDetectionResult(
      centerX: 0.5,
      centerY: 0.5,
      radius: 0.4,
      success: false,
    );
  }
}

class OpenCVTargetService {
  /// Detect the main target (center and radius) from an image file
  Future<TargetDetectionResult> detectTarget(String imagePath) async {
    try {
      // Read image
      final img = cv.imread(imagePath, flags: cv.IMREAD_COLOR);
      if (img.isEmpty) {
        return TargetDetectionResult.failure();
      }

      // Convert to grayscale
      final gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY);

      // Apply Gaussian blur to reduce noise
      final blurred = cv.gaussianBlur(gray, (9, 9), 2, sigmaY: 2);

      // Detect circles using the Hough transform.
      // Parameters need to be tuned for the specific target type.
      final circles = cv.HoughCircles(
        blurred,
        cv.HOUGH_GRADIENT,
        1, // dp
        (img.rows / 16)
            .toDouble(), // minDist decreased to allow more rings in the same area
        param1: 100, // Canny edge detection threshold
        param2:
            60, // Accumulator threshold (higher = fewer false circles, more accurate)
        minRadius: img.cols ~/ 20,
        maxRadius: img.cols ~/ 2,
      );

      // HoughCircles returns a Mat of shape (1, N, 3) (CV_32FC3), readable
      // with at<Vec3f>(0, i). The earlier "method 'at' not defined" error was
      // raised for VecPoint2f (a Dart wrapper type), not for Mat, which does
      // define `at`.

      if (circles.isEmpty) {
        // Retry with more lenient parameters if the first attempt fails
        final looseCircles = cv.HoughCircles(
          blurred,
          cv.HOUGH_GRADIENT,
          1,
          (img.rows / 8).toDouble(),
          param1: 100,
          param2: 40,
          minRadius: img.cols ~/ 20,
          maxRadius: img.cols ~/ 2,
        );

        if (looseCircles.isEmpty) {
          return TargetDetectionResult.failure();
        }
        return _findBestConcentricCircles(looseCircles, img.cols, img.rows);
      }

      return _findBestConcentricCircles(circles, img.cols, img.rows);
    } catch (e) {
      // print('Error detecting target with OpenCV: $e');
      return TargetDetectionResult.failure();
    }
  }

  TargetDetectionResult _findBestConcentricCircles(
    cv.Mat circles,
    int width,
    int height,
  ) {
    if (circles.rows == 0 || circles.cols == 0) {
      return TargetDetectionResult.failure();
    }

    final int numCircles = circles.cols;
    final List<({double x, double y, double r})> detected = [];

    // Extract circles: each Vec3f in the (1, N, 3) CV_32FC3 Mat holds
    // (x, y, radius).
    for (int i = 0; i < numCircles; i++) {
      final vec = circles.at<cv.Vec3f>(0, i);
      detected.add((x: vec.val1, y: vec.val2, r: vec.val3));
    }

    if (detected.isEmpty) return TargetDetectionResult.failure();

    // Cluster circles by center position. Circles count as "concentric"
    // if their centers lie within 5% of the image's smaller dimension.
    final double tolerance = math.min(width, height) * 0.05;
    final List<List<({double x, double y, double r})>> clusters = [];

    for (final circle in detected) {
      bool added = false;
      for (final cluster in clusters) {
        // Use the smallest circle in the cluster (the likely bullseye)
        // as the cluster center.
        double clusterCenterX = cluster.first.x;
        double clusterCenterY = cluster.first.y;
        double minRadiusInCluster = cluster.first.r;

        for (final c in cluster) {
          if (c.r < minRadiusInCluster) {
            minRadiusInCluster = c.r;
            clusterCenterX = c.x;
            clusterCenterY = c.y;
          }
        }

        final dist = math.sqrt(
          math.pow(circle.x - clusterCenterX, 2) +
              math.pow(circle.y - clusterCenterY, 2),
        );

        if (dist < tolerance) {
          cluster.add(circle);
          added = true;
          break;
        }
      }
      if (!added) {
        clusters.add([circle]);
      }
    }

    // Find the best cluster:
    // 1. Prefer clusters with more circles (concentric rings)
    // 2. Tie-break: closest to the image center
    List<({double x, double y, double r})> bestCluster = clusters.first;
    double bestScore = -1.0;

    for (final cluster in clusters) {
      // Base score = number of circles squared (heavily favor concentric rings)
      double score = math.pow(cluster.length, 2).toDouble() * 10.0;

      // Small penalty for distance from the image center (tie-breaker only)
      double cx = 0, cy = 0;
      for (final c in cluster) {
        cx += c.x;
        cy += c.y;
      }
      cx /= cluster.length;
      cy /= cluster.length;

      final distFromCenter = math.sqrt(
        math.pow(cx - width / 2, 2) + math.pow(cy - height / 2, 2),
      );
      final relDist = distFromCenter / math.min(width, height);

      score -=
          relDist * 2.0; // Very minor penalty so we don't snap to screen center

      if (score > bestScore) {
        bestScore = score;
        bestCluster = cluster;
      }
    }

    // Compute the final result from the best cluster:
    // center from the smallest circle (bullseye) for best precision,
    // radius from the largest circle (outer edge) for full coverage.
    double centerX = 0;
    double centerY = 0;
    double maxR = 0;
    double minR = double.infinity;

    for (final c in bestCluster) {
      if (c.r > maxR) {
        maxR = c.r;
      }
      if (c.r < minR) {
        minR = c.r;
        centerX = c.x;
        centerY = c.y;
      }
    }

    // Fallback if something went wrong (shouldn't happen with a non-empty cluster)
    if (minR == double.infinity) {
      centerX = bestCluster.first.x;
      centerY = bestCluster.first.y;
    }

    return TargetDetectionResult(
      centerX: centerX / width,
      centerY: centerY / height,
      radius: maxR / math.min(width, height),
      success: true,
    );
  }
}
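The clustering step in `_findBestConcentricCircles` groups circles whose centers nearly coincide, so a target's concentric rings collapse into one cluster while stray circles stay separate. A minimal standalone sketch of that grouping rule on synthetic circle data (the `clusterByCenter` name and all values are illustrative, not from the service):

```dart
import 'dart:math' as math;

typedef Circle = ({double x, double y, double r});

/// Groups circles whose centers lie within [tolerance] of a cluster's
/// first member — a simplified version of the grouping rule above.
List<List<Circle>> clusterByCenter(List<Circle> circles, double tolerance) {
  final clusters = <List<Circle>>[];
  for (final c in circles) {
    var added = false;
    for (final cluster in clusters) {
      final dx = cluster.first.x - c.x;
      final dy = cluster.first.y - c.y;
      if (math.sqrt(dx * dx + dy * dy) < tolerance) {
        cluster.add(c);
        added = true;
        break;
      }
    }
    if (!added) clusters.add([c]);
  }
  return clusters;
}

void main() {
  // Two concentric rings plus one unrelated circle (synthetic values).
  final clusters = clusterByCenter([
    (x: 100.0, y: 100.0, r: 20.0),
    (x: 102.0, y: 99.0, r: 45.0),
    (x: 300.0, y: 80.0, r: 30.0),
  ], 10.0);
  print(clusters.map((cl) => cl.length).toList()); // [2, 1]
}
```

With a tolerance of 5% of the smaller image dimension, slightly off-center Hough detections of the same target still land in one cluster.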
@@ -1,8 +1,13 @@
 import 'dart:math' as math;
 import '../data/models/target_type.dart';
 import 'image_processing_service.dart';
+import 'opencv_impact_detection_service.dart';
+import 'yolo_impact_detection_service.dart';
 
-export 'image_processing_service.dart' show ImpactDetectionSettings, ReferenceImpact, ImpactCharacteristics;
+export 'image_processing_service.dart'
+    show ImpactDetectionSettings, ReferenceImpact, ImpactCharacteristics;
+export 'opencv_impact_detection_service.dart'
+    show OpenCVDetectionSettings, OpenCVDetectedImpact;
 
 class TargetDetectionResult {
   final double centerX; // Relative (0-1)
@@ -49,16 +54,20 @@ class DetectedImpactResult {
 
 class TargetDetectionService {
   final ImageProcessingService _imageProcessingService;
+  final OpenCVImpactDetectionService _opencvService;
+  final YOLOImpactDetectionService _yoloService;
 
   TargetDetectionService({
     ImageProcessingService? imageProcessingService,
-  }) : _imageProcessingService = imageProcessingService ?? ImageProcessingService();
+    OpenCVImpactDetectionService? opencvService,
+    YOLOImpactDetectionService? yoloService,
+  }) : _imageProcessingService =
+           imageProcessingService ?? ImageProcessingService(),
+       _opencvService = opencvService ?? OpenCVImpactDetectionService(),
+       _yoloService = yoloService ?? YOLOImpactDetectionService();
 
   /// Detect target and impacts from an image file
-  TargetDetectionResult detectTarget(
-    String imagePath,
-    TargetType targetType,
-  ) {
+  TargetDetectionResult detectTarget(String imagePath, TargetType targetType) {
     try {
       // Detect main target
       final mainTarget = _imageProcessingService.detectMainTarget(imagePath);
@@ -79,7 +88,13 @@ class TargetDetectionService {
       // Convert impacts to relative coordinates and calculate scores
       final detectedImpacts = impacts.map((impact) {
         final score = targetType == TargetType.concentric
-            ? _calculateConcentricScore(impact.x, impact.y, centerX, centerY, radius)
+            ? _calculateConcentricScore(
+                impact.x,
+                impact.y,
+                centerX,
+                centerY,
+                radius,
+              )
             : _calculateSilhouetteScore(impact.x, impact.y, centerX, centerY);
 
         return DetectedImpactResult(
@@ -144,9 +159,9 @@ class TargetDetectionService {
 
     // Vertical zones
     if (dy < -0.25) return 5; // Head zone (top)
-    if (dy < 0.0) return 5; // Center mass (upper body)
-    if (dy < 0.15) return 4; // Body
-    if (dy < 0.35) return 3; // Lower body
+    if (dy < 0.0) return 5; // Center mass (upper body)
+    if (dy < 0.15) return 4; // Body
+    if (dy < 0.35) return 3; // Lower body
 
     return 0; // Outside target
   }
@@ -172,7 +187,13 @@ class TargetDetectionService {
     return impacts.map((impact) {
       final score = targetType == TargetType.concentric
           ? _calculateConcentricScoreWithRings(
-              impact.x, impact.y, centerX, centerY, radius, ringCount)
+              impact.x,
+              impact.y,
+              centerX,
+              centerY,
+              radius,
+              ringCount,
+            )
           : _calculateSilhouetteScore(impact.x, impact.y, centerX, centerY);
 
       return DetectedImpactResult(
@@ -216,7 +237,10 @@ class TargetDetectionService {
     String imagePath,
     List<ReferenceImpact> references,
   ) {
-    return _imageProcessingService.analyzeReferenceImpacts(imagePath, references);
+    return _imageProcessingService.analyzeReferenceImpacts(
+      imagePath,
+      references,
+    );
   }
 
   /// Detect impacts based on reference characteristics (calibrated detection)
@@ -240,7 +264,13 @@ class TargetDetectionService {
     return impacts.map((impact) {
       final score = targetType == TargetType.concentric
           ? _calculateConcentricScoreWithRings(
-              impact.x, impact.y, centerX, centerY, radius, ringCount)
+              impact.x,
+              impact.y,
+              centerX,
+              centerY,
+              radius,
+              ringCount,
+            )
           : _calculateSilhouetteScore(impact.x, impact.y, centerX, centerY);
 
       return DetectedImpactResult(
@@ -254,4 +284,135 @@ class TargetDetectionService {
       return [];
     }
   }
+
+  /// Detects impacts using OpenCV (Hough circles + contours).
+  ///
+  /// Uses OpenCV algorithms for a more robust detection:
+  /// - Hough transform to detect circles
+  /// - Contour analysis with circularity filtering
+  List<DetectedImpactResult> detectImpactsWithOpenCV(
+    String imagePath,
+    TargetType targetType,
+    double centerX,
+    double centerY,
+    double radius,
+    int ringCount, {
+    OpenCVDetectionSettings? settings,
+  }) {
+    try {
+      final impacts = _opencvService.detectImpacts(
+        imagePath,
+        settings: settings ?? const OpenCVDetectionSettings(),
+      );
+
+      return impacts.map((impact) {
+        final score = targetType == TargetType.concentric
+            ? _calculateConcentricScoreWithRings(
+                impact.x,
+                impact.y,
+                centerX,
+                centerY,
+                radius,
+                ringCount,
+              )
+            : _calculateSilhouetteScore(impact.x, impact.y, centerX, centerY);
+
+        return DetectedImpactResult(
+          x: impact.x,
+          y: impact.y,
+          radius: impact.radius,
+          suggestedScore: score,
+        );
+      }).toList();
+    } catch (e) {
+      print('OpenCV detection error: $e');
+      return [];
+    }
+  }
+
+  /// Detects impacts with OpenCV using reference impacts.
+  ///
+  /// Analyzes the reference impacts to learn their characteristics,
+  /// then detects similar impacts in the image.
+  List<DetectedImpactResult> detectImpactsWithOpenCVFromReferences(
+    String imagePath,
+    TargetType targetType,
+    double centerX,
+    double centerY,
+    double radius,
+    int ringCount,
+    List<ReferenceImpact> references, {
+    double tolerance = 2.0,
+  }) {
+    try {
+      // Convert the references to the OpenCV format
+      final refPoints = references.map((r) => (x: r.x, y: r.y)).toList();
+
+      final impacts = _opencvService.detectFromReferences(
+        imagePath,
+        refPoints,
+        tolerance: tolerance,
+      );
+
+      return impacts.map((impact) {
+        final score = targetType == TargetType.concentric
+            ? _calculateConcentricScoreWithRings(
+                impact.x,
+                impact.y,
+                centerX,
+                centerY,
+                radius,
+                ringCount,
+              )
+            : _calculateSilhouetteScore(impact.x, impact.y, centerX, centerY);
+
+        return DetectedImpactResult(
+          x: impact.x,
+          y: impact.y,
+          radius: impact.radius,
+          suggestedScore: score,
+        );
+      }).toList();
+    } catch (e) {
+      print('OpenCV detection from references error: $e');
+      return [];
+    }
+  }
+
+  /// Detects impacts using YOLOv8.
+  Future<List<DetectedImpactResult>> detectImpactsWithYOLO(
+    String imagePath,
+    TargetType targetType,
+    double centerX,
+    double centerY,
+    double radius,
+    int ringCount,
+  ) async {
+    try {
+      final impacts = await _yoloService.detectImpacts(imagePath);
+
+      return impacts.map((impact) {
+        final score = targetType == TargetType.concentric
+            ? _calculateConcentricScoreWithRings(
+                impact.x,
+                impact.y,
+                centerX,
+                centerY,
+                radius,
+                ringCount,
+              )
+            : _calculateSilhouetteScore(impact.x, impact.y, centerX, centerY);
+
+        return DetectedImpactResult(
+          x: impact.x,
+          y: impact.y,
+          radius: impact.radius,
+          suggestedScore: score,
+        );
+      }).toList();
+    } catch (e) {
+      print('YOLOv8 detection error: $e');
+      return [];
+    }
+  }
 }
174
lib/services/yolo_impact_detection_service.dart
Normal file
@@ -0,0 +1,174 @@
import 'dart:io';
import 'dart:math' as math;
import 'dart:typed_data';
import 'package:tflite_flutter/tflite_flutter.dart';
import 'package:image/image.dart' as img;
import 'target_detection_service.dart';

class YOLOImpactDetectionService {
  Interpreter? _interpreter;

  static const String modelPath = 'assets/models/yolov11n_impact.tflite';
  static const String labelsPath = 'assets/models/labels.txt';

  Future<void> init() async {
    if (_interpreter != null) return;

    try {
      // Try loading the specific YOLOv11 model first; fall back to v8 if not found
      try {
        _interpreter = await Interpreter.fromAsset(modelPath);
      } catch (e) {
        print('YOLOv11 model not found at $modelPath, trying YOLOv8 fallback');
        _interpreter = await Interpreter.fromAsset(
          'assets/models/yolov8n_impact.tflite',
        );
      }

      print('YOLO Interpreter loaded successfully');
    } catch (e) {
      print('Error loading YOLO model: $e');
    }
  }

  Future<List<DetectedImpactResult>> detectImpacts(String imagePath) async {
    if (_interpreter == null) await init();
    if (_interpreter == null) return [];

    try {
      final bytes = File(imagePath).readAsBytesSync();
      final originalImage = img.decodeImage(bytes);
      if (originalImage == null) return [];

      // YOLOv8/v11 usually takes a 640x640 input
      const int inputSize = 640;
      final resizedImage = img.copyResize(
        originalImage,
        width: inputSize,
        height: inputSize,
      );

      // Prepare input tensor
      var input = _imageToByteListFloat32(resizedImage, inputSize);

      // Raw YOLO output shape is usually [1, 4 + num_classes, 8400];
      // for the single class "impact", that is [1, 5, 8400]
      var output = List<double>.filled(1 * 5 * 8400, 0).reshape([1, 5, 8400]);

      _interpreter!.run(input, output);

      return _processOutput(
        output[0],
        originalImage.width,
        originalImage.height,
      );
    } catch (e) {
      print('Error during YOLO inference: $e');
      return [];
    }
  }

  List<DetectedImpactResult> _processOutput(
    List<List<double>> output,
    int imgWidth,
    int imgHeight,
  ) {
    final List<_Detection> candidates = [];
    const double threshold = 0.25;

    // output is [5, 8400] -> [x, y, w, h, conf]
    for (int i = 0; i < 8400; i++) {
      final double confidence = output[4][i];
      if (confidence > threshold) {
        candidates.add(
          _Detection(
            x: output[0][i],
            y: output[1][i],
            w: output[2][i],
            h: output[3][i],
            confidence: confidence,
          ),
        );
      }
    }

    // Apply Non-Max Suppression (NMS)
    final List<_Detection> suppressed = _nms(candidates);

    return suppressed
        .map(
          (det) => DetectedImpactResult(
            x: det.x / 640.0,
            y: det.y / 640.0,
            radius: 5.0,
            suggestedScore: 0,
          ),
        )
        .toList();
  }

  List<_Detection> _nms(List<_Detection> detections) {
    if (detections.isEmpty) return [];

    // Sort by confidence, descending
    detections.sort((a, b) => b.confidence.compareTo(a.confidence));

    final List<_Detection> selected = [];
    final List<bool> active = List.filled(detections.length, true);

    for (int i = 0; i < detections.length; i++) {
      if (!active[i]) continue;

      selected.add(detections[i]);

      for (int j = i + 1; j < detections.length; j++) {
        if (!active[j]) continue;

        if (_iou(detections[i], detections[j]) > 0.45) {
          active[j] = false;
        }
      }
    }

    return selected;
  }

  double _iou(_Detection a, _Detection b) {
    final double areaA = a.w * a.h;
    final double areaB = b.w * b.h;

    final double x1 = math.max(a.x - a.w / 2, b.x - b.w / 2);
    final double y1 = math.max(a.y - a.h / 2, b.y - b.h / 2);
    final double x2 = math.min(a.x + a.w / 2, b.x + b.w / 2);
    final double y2 = math.min(a.y + a.h / 2, b.y + b.h / 2);

    final double intersection = math.max(0.0, x2 - x1) * math.max(0.0, y2 - y1);
    return intersection / (areaA + areaB - intersection);
  }

  Uint8List _imageToByteListFloat32(img.Image image, int inputSize) {
    var convertedBytes = Float32List(1 * inputSize * inputSize * 3);
    var buffer = Float32List.view(convertedBytes.buffer);
    int pixelIndex = 0;
    for (int i = 0; i < inputSize; i++) {
      for (int j = 0; j < inputSize; j++) {
        var pixel = image.getPixel(j, i);
        buffer[pixelIndex++] = (pixel.r / 255.0);
        buffer[pixelIndex++] = (pixel.g / 255.0);
        buffer[pixelIndex++] = (pixel.b / 255.0);
      }
    }
    return convertedBytes.buffer.asUint8List();
  }
}

class _Detection {
  final double x, y, w, h, confidence;
  _Detection({
    required this.x,
    required this.y,
    required this.w,
    required this.h,
    required this.confidence,
  });
}
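`_nms` keeps the highest-confidence box and suppresses any overlapping box whose intersection-over-union exceeds 0.45. A standalone sketch of that IoU computation for center-format boxes `(x, y, w, h)`, with synthetic values (the `iou` helper mirrors `_iou` above but is not part of the service):

```dart
import 'dart:math' as math;

typedef Box = ({double x, double y, double w, double h});

/// IoU for two axis-aligned boxes in center format (x, y, w, h).
double iou(Box a, Box b) {
  final x1 = math.max(a.x - a.w / 2, b.x - b.w / 2);
  final y1 = math.max(a.y - a.h / 2, b.y - b.h / 2);
  final x2 = math.min(a.x + a.w / 2, b.x + b.w / 2);
  final y2 = math.min(a.y + a.h / 2, b.y + b.h / 2);
  final inter = math.max(0.0, x2 - x1) * math.max(0.0, y2 - y1);
  return inter / (a.w * a.h + b.w * b.h - inter);
}

void main() {
  final a = (x: 0.0, y: 0.0, w: 2.0, h: 2.0);
  final b = (x: 1.0, y: 0.0, w: 2.0, h: 2.0); // shifted by half a box width

  print(iou(a, a)); // identical boxes -> 1.0
  print(iou(a, b)); // intersection 2, union 6 -> 0.333...
}
```

With the 0.45 threshold, the half-shifted pair above (IoU 1/3) would both survive NMS, while near-duplicate detections of the same impact would not.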
@@ -7,6 +7,7 @@ list(APPEND FLUTTER_PLUGIN_LIST
 )
 
 list(APPEND FLUTTER_FFI_PLUGIN_LIST
+  tflite_flutter
 )
 
 set(PLUGIN_BUNDLED_LIBRARIES)
70
pubspec.lock
@@ -25,6 +25,14 @@ packages:
     url: "https://pub.dev"
     source: hosted
     version: "2.1.2"
+  change_case:
+    dependency: transitive
+    description:
+      name: change_case
+      sha256: e41ef3df58521194ef8d7649928954805aeb08061917cf658322305e61568003
+      url: "https://pub.dev"
+    source: hosted
+    version: "2.2.0"
   characters:
     dependency: transitive
     description:
@@ -61,10 +69,10 @@ packages:
     dependency: transitive
     description:
       name: cross_file
-      sha256: "701dcfc06da0882883a2657c445103380e53e647060ad8d9dfb710c100996608"
+      sha256: "28bb3ae56f117b5aec029d702a90f57d285cd975c3c5c281eaca38dbc47c5937"
       url: "https://pub.dev"
     source: hosted
-    version: "0.3.5+1"
+    version: "0.3.5+2"
   crypto:
     dependency: transitive
     description:
@@ -81,6 +89,14 @@ packages:
       url: "https://pub.dev"
     source: hosted
     version: "1.0.8"
+  dartcv4:
+    dependency: transitive
+    description:
+      name: dartcv4
+      sha256: "43dba49162662f3b6e3daf5a95d071429365e2f1ada67d412b851fc9be442e58"
+      url: "https://pub.dev"
+    source: hosted
+    version: "2.2.1+1"
   equatable:
     dependency: transitive
     description:
@@ -200,6 +216,14 @@ packages:
       url: "https://pub.dev"
     source: hosted
     version: "2.1.3"
+  google_mlkit_document_scanner:
+    dependency: "direct main"
+    description:
+      name: google_mlkit_document_scanner
+      sha256: "67428ddb853880c8185049a5834cd328e6420921a74786f6aadee0b76f8536bd"
+      url: "https://pub.dev"
+    source: hosted
+    version: "0.2.1"
   hooks:
     dependency: transitive
     description:
@@ -244,10 +268,10 @@ packages:
     dependency: transitive
     description:
       name: image_picker_android
-      sha256: "5e9bf126c37c117cf8094215373c6d561117a3cfb50ebc5add1a61dc6e224677"
+      sha256: "518a16108529fc18657a3e6dde4a043dc465d16596d20ab2abd49a4cac2e703d"
       url: "https://pub.dev"
     source: hosted
-    version: "0.8.13+10"
+    version: "0.8.13+13"
   image_picker_for_web:
     dependency: transitive
     description:
@@ -260,10 +284,10 @@ packages:
     dependency: transitive
     description:
       name: image_picker_ios
-      sha256: "956c16a42c0c708f914021666ffcd8265dde36e673c9fa68c81f7d085d9774ad"
+      sha256: b9c4a438a9ff4f60808c9cf0039b93a42bb6c2211ef6ebb647394b2b3fa84588
       url: "https://pub.dev"
     source: hosted
-    version: "0.8.13+3"
+    version: "0.8.13+6"
   image_picker_linux:
     dependency: transitive
     description:
@@ -384,6 +408,14 @@ packages:
       url: "https://pub.dev"
     source: hosted
     version: "0.17.4"
+  native_toolchain_cmake:
+    dependency: transitive
+    description:
+      name: native_toolchain_cmake
+      sha256: fe40e8483183ced98e851e08a9cd2a547fd412cccab98277aa23f2377e43d66f
+      url: "https://pub.dev"
+    source: hosted
+    version: "0.2.4"
   nested:
     dependency: transitive
     description:
@@ -392,6 +424,14 @@ packages:
       url: "https://pub.dev"
     source: hosted
     version: "1.0.0"
+  opencv_dart:
+    dependency: "direct main"
+    description:
+      name: opencv_dart
+      sha256: c2b7cc614cad69c2857e9b684e3066af662a03fe7100f4dc9a630e81ad42103a
+      url: "https://pub.dev"
+    source: hosted
+    version: "2.2.1+1"
   path:
     dependency: "direct main"
     description:
@@ -496,6 +536,14 @@ packages:
       url: "https://pub.dev"
     source: hosted
     version: "2.2.0"
+  quiver:
+    dependency: transitive
+    description:
+      name: quiver
+      sha256: ea0b925899e64ecdfbf9c7becb60d5b50e706ade44a85b2363be2a22d88117d2
+      url: "https://pub.dev"
+    source: hosted
+    version: "3.2.2"
   sky_engine:
     dependency: transitive
     description: flutter
@@ -613,6 +661,14 @@ packages:
       url: "https://pub.dev"
     source: hosted
     version: "0.7.9"
+  tflite_flutter:
+    dependency: "direct main"
+    description:
+      name: tflite_flutter
+      sha256: ffb8651fdb116ab0131d6dc47ff73883e0f634ad1ab12bb2852eef1bbeab4a6a
+      url: "https://pub.dev"
+    source: hosted
+    version: "0.10.4"
   typed_data:
     dependency: transitive
     description:
@@ -679,4 +735,4 @@ packages:
     version: "3.1.3"
 sdks:
   dart: ">=3.12.0-35.0.dev <4.0.0"
-  flutter: ">=3.35.0"
+  flutter: ">=3.38.1"
@@ -35,11 +35,11 @@ dependencies:
   # Use with the CupertinoIcons class for iOS style icons.
   cupertino_icons: ^1.0.8
 
   # Image processing with OpenCV (disabled for now due to build issues)
-  # opencv_dart: ^2.1.0
+  opencv_dart: ^2.1.0
 
   # Image capture from camera/gallery
-  image_picker: ^1.0.7
+  image_picker: ^1.2.1
   google_mlkit_document_scanner: ^0.2.0
 
   # Local database for history
   sqflite: ^2.3.2
@@ -64,6 +64,9 @@ dependencies:
   # Image processing for impact detection
   image: ^4.1.7
 
+  # Machine Learning for YOLOv8
+  tflite_flutter: ^0.10.4
+
 dev_dependencies:
   flutter_test:
     sdk: flutter
12
tests/find_homography_test.dart
Normal file
@@ -0,0 +1,12 @@
import 'package:opencv_dart/opencv_dart.dart' as cv;

void main() {
  var p1 = cv.VecPoint.fromList([cv.Point(0, 0), cv.Point(1, 1)]);
  var p2 = cv.VecPoint2f.fromList([cv.Point2f(0, 0), cv.Point2f(1, 1)]);

  // Check whether findHomography accepts the point vectors directly,
  // or whether they must first be wrapped in Mats via Mat.fromVec.
  cv.Mat mat1 = cv.Mat.fromVec(p1);
  cv.Mat mat2 = cv.Mat.fromVec(p2);
  cv.findHomography(mat1, mat2);
}
7
tests/opencv_quad_test.dart
Normal file
@@ -0,0 +1,7 @@
import 'package:opencv_dart/opencv_dart.dart' as cv;

void main() {
  print(cv.approxPolyDP);
  print(cv.arcLength);
  print(cv.contourArea);
}
5
tests/test_homography.dart
Normal file
@@ -0,0 +1,5 @@
import 'package:opencv_dart/opencv_dart.dart' as cv;

void main() {
  print(cv.findHomography);
}
@@ -7,6 +7,7 @@ list(APPEND FLUTTER_PLUGIN_LIST
 )
 
 list(APPEND FLUTTER_FFI_PLUGIN_LIST
+  tflite_flutter
 )
 
 set(PLUGIN_BUNDLED_LIBRARIES)