8 Commits

29 changed files with 2067 additions and 240 deletions

analyze_log.txt

@@ -0,0 +1,31 @@
Analyzing bully...
info - Statements in an if should be enclosed in a block - lib\features\analysis\analysis_screen.dart:122:17 - curly_braces_in_flow_control_structures
info - 'withOpacity' is deprecated and shouldn't be used. Use .withValues() to avoid precision loss - lib\features\analysis\analysis_screen.dart:650:51 - deprecated_member_use
warning - The declaration '_showAddShotHint' isn't referenced - lib\features\analysis\analysis_screen.dart:1083:8 - unused_element
warning - The declaration '_showAutoDetectDialog' isn't referenced - lib\features\analysis\analysis_screen.dart:1120:8 - unused_element
warning - Unused import: 'widgets/target_type_selector.dart' - lib\features\capture\capture_screen.dart:16:8 - unused_import
info - The private field _selectedType could be 'final' - lib\features\capture\capture_screen.dart:28:14 - prefer_final_fields
info - 'scale' is deprecated and shouldn't be used. Use scaleByVector3, scaleByVector4, or scaleByDouble instead - lib\features\crop\crop_screen.dart:141:25 - deprecated_member_use
info - The import of 'package:flutter/foundation.dart' is unnecessary because all of the used elements are also provided by the import of 'package:flutter/material.dart' - lib\features\statistics\statistics_screen.dart:8:8 - unnecessary_import
warning - The declaration '_buildLegendItem' isn't referenced - lib\features\statistics\statistics_screen.dart:309:10 - unused_element
info - Unnecessary use of string interpolation - lib\features\statistics\statistics_screen.dart:408:15 - unnecessary_string_interpolations
info - Don't invoke 'print' in production code - lib\services\image_processing_service.dart:192:7 - avoid_print
info - Don't invoke 'print' in production code - lib\services\image_processing_service.dart:239:7 - avoid_print
info - Don't invoke 'print' in production code - lib\services\image_processing_service.dart:246:9 - avoid_print
info - Don't invoke 'print' in production code - lib\services\image_processing_service.dart:278:9 - avoid_print
info - Don't invoke 'print' in production code - lib\services\image_processing_service.dart:289:11 - avoid_print
info - Don't invoke 'print' in production code - lib\services\image_processing_service.dart:292:11 - avoid_print
info - Don't invoke 'print' in production code - lib\services\image_processing_service.dart:297:9 - avoid_print
info - Don't invoke 'print' in production code - lib\services\image_processing_service.dart:332:7 - avoid_print
info - Don't invoke 'print' in production code - lib\services\image_processing_service.dart:336:7 - avoid_print
info - Don't invoke 'print' in production code - lib\services\image_processing_service.dart:683:7 - avoid_print
info - Don't invoke 'print' in production code - lib\services\image_processing_service.dart:725:7 - avoid_print
info - Don't invoke 'print' in production code - lib\services\image_processing_service.dart:736:7 - avoid_print
warning - The declaration '_detectDarkSpotsAdaptive' isn't referenced - lib\services\image_processing_service.dart:780:15 - unused_element
info - Don't invoke 'print' in production code - lib\services\opencv_impact_detection_service.dart:104:5 - avoid_print
info - Don't invoke 'print' in production code - lib\services\opencv_impact_detection_service.dart:116:5 - avoid_print
info - Don't invoke 'print' in production code - lib\services\target_detection_service.dart:297:7 - avoid_print
info - Don't invoke 'print' in production code - lib\services\target_detection_service.dart:342:7 - avoid_print
27 issues found. (ran in 1.9s)

analyze_opencv.txt

Binary file not shown.


@@ -1,4 +1,6 @@
<manifest xmlns:android="http://schemas.android.com/apk/res/android">
<uses-permission android:name="android.permission.CAMERA" />
<application
android:label="bully"
android:name="${applicationName}"

build_log.txt

@@ -0,0 +1,20 @@
Running Gradle task 'assembleDebug'...
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':app:processDebugResources'.
> A failure occurred while executing com.android.build.gradle.internal.res.LinkApplicationAndroidResourcesTask$TaskAction
> Android resource linking failed
ERROR: C:\Users\streaper2\Documents\00 - projet\bully\build\cunning_document_scanner\intermediates\merged_manifest\debug\processDebugManifest\AndroidManifest.xml:9:5-65: AAPT: error: unexpected element <uses-permission> found in <manifest><application>.
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
> Get more help at https://help.gradle.org.
BUILD FAILED in 5s
Running Gradle task 'assembleDebug'... 5,4s
Gradle task assembleDebug failed with exit code 1

docs/README.md

@@ -0,0 +1,26 @@
# Bully Project Documentation
Welcome to the developer documentation for the **Bully** application.
This project is a Flutter application for analyzing shooting targets (impact detection).
## Architecture
The source code lives in the `lib/` directory, organized into the following layers:
- **Features (`lib/features`)**: Screens and UI logic (views/pages). This is where the user interface lives.
- **Services (`lib/services`)**: Business services and utilities (image processing, computations, etc.), independent of the UI.
- **Data (`lib/data`)**: Data management (models, local database, repositories).
## Documentation Sections
For more detail on each part, see the dedicated sections:
- 🏗️ **[Services (Business Logic)](services/README.md)**: Documentation for services such as image processing and score calculation.
- 📱 **[Views & Features (UI)](features/README.md)**: Documentation for the main screens (e.g. Analysis).
- 💾 **[Database & Models](data/README.md)**: Data structure and persistence.
## Getting Started
1. Make sure Flutter is installed.
2. Run `flutter run` to start the application.

docs/data/README.md

@@ -0,0 +1,17 @@
# Data & Persistence
This layer handles saving and retrieving data.
## Database
The application uses a local database (likely SQLite/Drift or Hive; check `lib/data/database` to confirm).
## Models (`lib/data/models`)
Classes representing the persisted domain objects.
Likely examples:
- `Session`: a shooting session.
- `Impact`: a bullet impact on the target.
- `Target`: the configuration of a target.
## Repositories (`lib/data/repositories`)
The Repository pattern abstracts the data source (local DB, remote API, etc.) from the rest of the application.
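The Repository pattern above can be sketched as follows. The method names and the `Session` fields are illustrative assumptions, not the actual API of `lib/data/repositories`:

```dart
// Hedged sketch of the Repository pattern: the abstraction hides the data
// source, and an in-memory implementation can back it for tests. The
// production implementation would wrap the local database instead.

class Session {
  final String id;
  final int score;
  Session(this.id, this.score);
}

// The abstraction the rest of the app depends on.
abstract class SessionRepository {
  Future<List<Session>> getAllSessions();
  Future<void> saveSession(Session session);
}

// Trivial in-memory backing, convenient for unit tests.
class InMemorySessionRepository implements SessionRepository {
  final List<Session> _store = [];

  @override
  Future<List<Session>> getAllSessions() async => List.unmodifiable(_store);

  @override
  Future<void> saveSession(Session session) async {
    _store.add(session);
  }
}
```

Because callers only see `SessionRepository`, swapping the in-memory version for a database-backed one requires no changes in the UI layer.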

docs/features/README.md

@@ -0,0 +1,17 @@
# Features & Views
This section documents the application's main screens and how they are organized.
## Main Screens
### Analysis (`lib/features/analysis`)
This is the core of the application. It lets the user take a photo or pick an image to analyze impacts.
- **AnalysisScreen** (`analysis_screen.dart`): the main screen, which orchestrates capture and the display of results.
- **AnalysisProvider** (`analysis_provider.dart`): the state-management layer for this screen; it bridges the view and the services.
## Structure of a Feature
Each feature typically consists of:
- `_screen.dart`: the page widget.
- `_provider.dart`: the state logic (ChangeNotifier, Bloc, etc.).
- `widgets/`: widgets specific to this feature.
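The screen/provider split can be sketched in a framework-free way: state lives in a class that notifies registered listeners on change. The real `AnalysisProvider` extends Flutter's `ChangeNotifier`; the class and method names below are illustrative:

```dart
// Hand-rolled listener mechanism standing in for ChangeNotifier, so the
// sketch runs without Flutter. Views register callbacks and are invoked
// whenever the state mutates.

class ShotCounterProvider {
  final List<void Function()> _listeners = [];
  int _shotCount = 0;

  int get shotCount => _shotCount;

  void addListener(void Function() listener) => _listeners.add(listener);

  // Mimics notifyListeners(): every registered view callback is invoked
  // after the state changes, prompting a rebuild.
  void addShot() {
    _shotCount++;
    for (final listener in _listeners) {
      listener();
    }
  }
}
```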

docs/services/README.md

@@ -0,0 +1,20 @@
# Services
Services contain the application's business logic, isolated from the user interface.
## Main Services
| Service | Description | File |
| :--- | :--- | :--- |
| **ImageProcessingService** | Handles heavy image processing (filters, detection). | `lib/services/image_processing_service.dart` |
| **DistortionCorrection** | Corrects the perspective distortion of targets. | `lib/services/distortion_correction_service.dart` |
| **ScoreCalculator** | Computes the score from the detected impacts. | `lib/services/score_calculator_service.dart` |
| **StatisticsService** | Generates statistics for shooting sessions. | `lib/services/statistics_service.dart` |
## Usage Example (Illustrative)
```dart
// Example call to the score-calculation service
final calculator = ScoreCalculatorService();
final score = calculator.calculate(impacts);
```
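In the same spirit, here is a hedged sketch of the kind of aggregation `StatisticsService` might perform; the function name and signature are assumptions, not the real API in `lib/services/statistics_service.dart`:

```dart
// Illustrative only: computes the mean score over a list of session scores,
// returning 0.0 for an empty history rather than dividing by zero.
double averageScore(List<int> sessionScores) {
  if (sessionScores.isEmpty) return 0.0;
  final total = sessionScores.fold<int>(0, (sum, s) => sum + s);
  return total / sessionScores.length;
}
```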


@@ -2,6 +2,8 @@
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>NSCameraUsageDescription</key>
<string>This app needs camera access to scan documents</string>
<key>CADisableMinimumFrameDurationOnPhone</key>
<true/>
<key>CFBundleDevelopmentRegion</key>


@@ -17,6 +17,7 @@ import '../../services/target_detection_service.dart';
import '../../services/score_calculator_service.dart';
import '../../services/grouping_analyzer_service.dart';
import '../../services/distortion_correction_service.dart';
import '../../services/opencv_target_service.dart';
enum AnalysisState { initial, loading, success, error }
@@ -26,6 +27,7 @@ class AnalysisProvider extends ChangeNotifier {
final GroupingAnalyzerService _groupingAnalyzerService;
final SessionRepository _sessionRepository;
final DistortionCorrectionService _distortionService;
final OpenCVTargetService _opencvTargetService;
final Uuid _uuid = const Uuid();
AnalysisProvider({
@@ -34,11 +36,13 @@ class AnalysisProvider extends ChangeNotifier {
required GroupingAnalyzerService groupingAnalyzerService,
required SessionRepository sessionRepository,
DistortionCorrectionService? distortionService,
OpenCVTargetService? opencvTargetService,
}) : _detectionService = detectionService,
_scoreCalculatorService = scoreCalculatorService,
_groupingAnalyzerService = groupingAnalyzerService,
_sessionRepository = sessionRepository,
_distortionService = distortionService ?? DistortionCorrectionService();
_distortionService = distortionService ?? DistortionCorrectionService(),
_opencvTargetService = opencvTargetService ?? OpenCVTargetService();
AnalysisState _state = AnalysisState.initial;
String? _errorMessage;
@@ -49,6 +53,7 @@ class AnalysisProvider extends ChangeNotifier {
double _targetCenterX = 0.5;
double _targetCenterY = 0.5;
double _targetRadius = 0.4;
double _targetInnerRadius = 0.04;
int _ringCount = 10;
List<double>? _ringRadii; // Individual ring radii multipliers
double _imageAspectRatio = 1.0; // width / height
@@ -79,6 +84,7 @@ class AnalysisProvider extends ChangeNotifier {
double get targetCenterX => _targetCenterX;
double get targetCenterY => _targetCenterY;
double get targetRadius => _targetRadius;
double get targetInnerRadius => _targetInnerRadius;
int get ringCount => _ringCount;
List<double>? get ringRadii =>
_ringRadii != null ? List.unmodifiable(_ringRadii!) : null;
@@ -134,6 +140,7 @@ class AnalysisProvider extends ChangeNotifier {
_targetCenterX = 0.5;
_targetCenterY = 0.5;
_targetRadius = 0.4;
_targetInnerRadius = 0.04;
// Initialize empty shots list
_shots = [];
@@ -156,6 +163,7 @@ class AnalysisProvider extends ChangeNotifier {
_targetCenterX = result.centerX;
_targetCenterY = result.centerY;
_targetRadius = result.radius;
_targetInnerRadius = result.radius * 0.1;
// Create shots from detected impacts
_shots = result.impacts.map((impact) {
@@ -484,12 +492,14 @@ class AnalysisProvider extends ChangeNotifier {
void adjustTargetPosition(
double centerX,
double centerY,
double innerRadius,
double radius, {
int? ringCount,
List<double>? ringRadii,
}) {
_targetCenterX = centerX;
_targetCenterY = centerY;
_targetInnerRadius = innerRadius;
_targetRadius = radius;
if (ringCount != null) {
_ringCount = ringCount;
@@ -508,6 +518,43 @@ class AnalysisProvider extends ChangeNotifier {
notifyListeners();
}
/// Auto-calibrate target using OpenCV
Future<bool> autoCalibrateTarget() async {
if (_imagePath == null) return false;
try {
// 1. Attempt to correct perspective/distortion first
final correctedPath = await _distortionService
.correctPerspectiveWithConcentricMesh(_imagePath!);
if (correctedPath != _imagePath) {
_imagePath = correctedPath;
_correctedImagePath = correctedPath;
_distortionCorrectionEnabled = true;
_imageAspectRatio =
1.0; // The corrected image is always square (side x side)
notifyListeners();
}
// 2. Detect the target on the straight/corrected image
final result = await _opencvTargetService.detectTarget(_imagePath!);
if (result.success) {
adjustTargetPosition(
result.centerX,
result.centerY,
result.radius * 0.1,
result.radius,
);
return true;
}
return false;
} catch (e) {
print('Auto-calibration error: $e');
return false;
}
}
/// Computes the distortion parameters based on the current calibration
void calculateDistortion() {
_distortionParams = _distortionService.calculateDistortionFromCalibration(
@@ -665,6 +712,7 @@ class AnalysisProvider extends ChangeNotifier {
_targetCenterX = 0.5;
_targetCenterY = 0.5;
_targetRadius = 0.4;
_targetInnerRadius = 0.04;
_ringCount = 10;
_ringRadii = null;
_imageAspectRatio = 1.0;


@@ -118,8 +118,9 @@ class _AnalysisScreenContentState extends State<_AnalysisScreenContent> {
actions: [
Consumer<AnalysisProvider>(
builder: (context, provider, _) {
if (provider.state != AnalysisState.success)
if (provider.state != AnalysisState.success) {
return const SizedBox.shrink();
}
return IconButton(
icon: Icon(_isCalibrating ? Icons.check : Icons.tune),
onPressed: () {
@@ -273,6 +274,68 @@ class _AnalysisScreenContentState extends State<_AnalysisScreenContent> {
),
child: Column(
children: [
// Auto-calibrate button
SizedBox(
width: double.infinity,
child: ElevatedButton.icon(
onPressed: () async {
ScaffoldMessenger.of(context).showSnackBar(
const SnackBar(
content: Row(
children: [
SizedBox(
width: 20,
height: 20,
child: CircularProgressIndicator(
strokeWidth: 2,
color: Colors.white,
),
),
SizedBox(width: 12),
Text('Auto-calibration en cours...'),
],
),
duration: Duration(seconds: 2),
),
);
final success = await provider
.autoCalibrateTarget();
if (context.mounted) {
ScaffoldMessenger.of(
context,
).hideCurrentSnackBar();
if (success) {
ScaffoldMessenger.of(context).showSnackBar(
const SnackBar(
content: Text(
'Cible calibrée automatiquement',
),
backgroundColor: AppTheme.successColor,
),
);
} else {
ScaffoldMessenger.of(context).showSnackBar(
const SnackBar(
content: Text(
'Échec de la calibration auto',
),
backgroundColor: AppTheme.errorColor,
),
);
}
}
},
icon: const Icon(Icons.auto_fix_high),
label: const Text('Auto-Calibrer la Cible'),
style: ElevatedButton.styleFrom(
backgroundColor: Colors.deepPurple,
foregroundColor: Colors.white,
),
),
),
const SizedBox(height: 16),
// Ring count slider
Row(
children: [
@@ -298,6 +361,7 @@ class _AnalysisScreenContentState extends State<_AnalysisScreenContent> {
provider.adjustTargetPosition(
provider.targetCenterX,
provider.targetCenterY,
provider.targetInnerRadius,
provider.targetRadius,
ringCount: value.round(),
);
@@ -348,6 +412,7 @@ class _AnalysisScreenContentState extends State<_AnalysisScreenContent> {
provider.adjustTargetPosition(
provider.targetCenterX,
provider.targetCenterY,
provider.targetInnerRadius,
value,
ringCount: provider.ringCount,
);
@@ -375,7 +440,7 @@ class _AnalysisScreenContentState extends State<_AnalysisScreenContent> {
),
const Divider(color: Colors.white24, height: 16),
// Distortion correction row
Row(
/*Row(
children: [
const Icon(
Icons.lens_blur,
@@ -440,19 +505,19 @@ class _AnalysisScreenContentState extends State<_AnalysisScreenContent> {
),
),
const SizedBox(width: 8),
Switch(
/*Switch(
value: provider.distortionCorrectionEnabled,
onChanged: (value) => provider
.setDistortionCorrectionEnabled(value),
activeTrackColor: AppTheme.primaryColor
.withValues(alpha: 0.5),
activeThumbColor: AppTheme.primaryColor,
),
),*/
],
),
],
],
),
),*/
],
),
),
@@ -472,6 +537,7 @@ class _AnalysisScreenContentState extends State<_AnalysisScreenContent> {
initialCenterX: provider.targetCenterX,
initialCenterY: provider.targetCenterY,
initialRadius: provider.targetRadius,
initialInnerRadius: provider.targetInnerRadius,
initialRingCount: provider.ringCount,
initialRingRadii: provider.ringRadii,
targetType: provider.targetType!,
@@ -479,6 +545,7 @@ class _AnalysisScreenContentState extends State<_AnalysisScreenContent> {
(
centerX,
centerY,
innerRadius,
radius,
ringCount, {
List<double>? ringRadii,
@@ -486,6 +553,7 @@ class _AnalysisScreenContentState extends State<_AnalysisScreenContent> {
provider.adjustTargetPosition(
centerX,
centerY,
innerRadius,
radius,
ringCount: ringCount,
ringRadii: ringRadii,
@@ -647,7 +715,7 @@ class _AnalysisScreenContentState extends State<_AnalysisScreenContent> {
boxShadow: [
if (!_isAtBottom)
BoxShadow(
color: Colors.black.withOpacity(0.2),
color: Colors.black.withValues(alpha: 0.2),
blurRadius: 6,
offset: const Offset(0, 3),
),
@@ -1080,6 +1148,7 @@ class _AnalysisScreenContentState extends State<_AnalysisScreenContent> {
);
}
/*
void _showAddShotHint(BuildContext context) {
ScaffoldMessenger.of(context).showSnackBar(
const SnackBar(
@@ -1088,6 +1157,7 @@ class _AnalysisScreenContentState extends State<_AnalysisScreenContent> {
),
);
}
*/
void _showClearConfirmation(BuildContext context, AnalysisProvider provider) {
showDialog(
@@ -1117,6 +1187,7 @@ class _AnalysisScreenContentState extends State<_AnalysisScreenContent> {
);
}
/*
void _showAutoDetectDialog(BuildContext context, AnalysisProvider provider) {
// Detection settings
bool clearExisting = true;
@@ -1315,6 +1386,7 @@ class _AnalysisScreenContentState extends State<_AnalysisScreenContent> {
),
);
}
*/
void _showCalibratedDetectionDialog(
BuildContext context,


@@ -13,16 +13,26 @@ class TargetCalibration extends StatefulWidget {
final double initialCenterX;
final double initialCenterY;
final double initialRadius;
final double initialInnerRadius;
final int initialRingCount;
final TargetType targetType;
final List<double>? initialRingRadii;
final Function(double centerX, double centerY, double radius, int ringCount, {List<double>? ringRadii}) onCalibrationChanged;
final Function(
double centerX,
double centerY,
double innerRadius,
double radius,
int ringCount, {
List<double>? ringRadii,
})
onCalibrationChanged;
const TargetCalibration({
super.key,
required this.initialCenterX,
required this.initialCenterY,
required this.initialRadius,
required this.initialInnerRadius,
this.initialRingCount = 10,
required this.targetType,
this.initialRingRadii,
@@ -37,11 +47,13 @@ class _TargetCalibrationState extends State<TargetCalibration> {
late double _centerX;
late double _centerY;
late double _radius;
late double _innerRadius;
late int _ringCount;
late List<double> _ringRadii;
bool _isDraggingCenter = false;
bool _isDraggingRadius = false;
bool _isDraggingInnerRadius = false;
@override
void initState() {
@@ -49,28 +61,57 @@ class _TargetCalibrationState extends State<TargetCalibration> {
_centerX = widget.initialCenterX;
_centerY = widget.initialCenterY;
_radius = widget.initialRadius;
_innerRadius = widget.initialInnerRadius;
_ringCount = widget.initialRingCount;
_initRingRadii();
}
void _initRingRadii() {
if (widget.initialRingRadii != null && widget.initialRingRadii!.length == _ringCount) {
if (widget.initialRingRadii != null &&
widget.initialRingRadii!.length == _ringCount) {
_ringRadii = List.from(widget.initialRingRadii!);
} else {
// Initialize with default proportional radii
_ringRadii = List.generate(_ringCount, (i) => (i + 1) / _ringCount);
// Initialize with default proportional radii interpolated between inner and outer
_ringRadii = List.generate(_ringCount, (i) {
if (_ringCount <= 1) return 1.0;
final ratio = _innerRadius / _radius;
return ratio + (1.0 - ratio) * i / (_ringCount - 1);
});
}
}
@override
void didUpdateWidget(TargetCalibration oldWidget) {
super.didUpdateWidget(oldWidget);
bool shouldReinit = false;
if (widget.initialCenterX != oldWidget.initialCenterX &&
!_isDraggingCenter) {
_centerX = widget.initialCenterX;
}
if (widget.initialCenterY != oldWidget.initialCenterY &&
!_isDraggingCenter) {
_centerY = widget.initialCenterY;
}
if (widget.initialRingCount != oldWidget.initialRingCount) {
_ringCount = widget.initialRingCount;
_initRingRadii();
shouldReinit = true;
}
if (widget.initialRadius != oldWidget.initialRadius && !_isDraggingRadius) {
_radius = widget.initialRadius;
shouldReinit = true;
}
if (widget.initialInnerRadius != oldWidget.initialInnerRadius &&
!_isDraggingInnerRadius) {
_innerRadius = widget.initialInnerRadius;
shouldReinit = true;
}
if (widget.initialRingRadii != oldWidget.initialRingRadii) {
shouldReinit = true;
}
if (shouldReinit) {
_initRingRadii();
}
}
@@ -90,11 +131,13 @@ class _TargetCalibrationState extends State<TargetCalibration> {
centerX: _centerX,
centerY: _centerY,
radius: _radius,
innerRadius: _innerRadius,
ringCount: _ringCount,
ringRadii: _ringRadii,
targetType: widget.targetType,
isDraggingCenter: _isDraggingCenter,
isDraggingRadius: _isDraggingRadius,
isDraggingInnerRadius: _isDraggingInnerRadius,
),
),
);
@@ -109,21 +152,42 @@ class _TargetCalibrationState extends State<TargetCalibration> {
// Check if tapping on center handle
final distToCenter = _distance(tapX, tapY, _centerX, _centerY);
// Check if tapping on radius handle (on the right edge of the outermost circle)
// Check if tapping on outer radius handle
final minDim = math.min(size.width, size.height);
final outerRadius = _radius * (_ringRadii.isNotEmpty ? _ringRadii.last : 1.0);
final outerRadius = _radius;
final radiusHandleX = _centerX + outerRadius * minDim / size.width;
final radiusHandleY = _centerY;
final distToRadiusHandle = _distance(tapX, tapY, radiusHandleX.clamp(0.0, 1.0), radiusHandleY.clamp(0.0, 1.0));
final distToOuterHandle = _distance(
tapX,
tapY,
radiusHandleX.clamp(0.0, 1.0),
radiusHandleY.clamp(0.0, 1.0),
);
// Check if tapping on inner radius handle (top edge of innermost circle)
final actualInnerRadius = _innerRadius;
final innerHandleX = _centerX;
final innerHandleY = _centerY - actualInnerRadius * minDim / size.height;
final distToInnerHandle = _distance(
tapX,
tapY,
innerHandleX.clamp(0.0, 1.0),
innerHandleY.clamp(0.0, 1.0),
);
// Increase touch target size slightly for handles
if (distToCenter < 0.05) {
setState(() {
_isDraggingCenter = true;
});
} else if (distToRadiusHandle < 0.05) {
} else if (distToOuterHandle < 0.05) {
setState(() {
_isDraggingRadius = true;
});
} else if (distToInnerHandle < 0.05) {
setState(() {
_isDraggingInnerRadius = true;
});
} else if (distToCenter < _radius + 0.02) {
// Tapping inside the target - move center
setState(() {
@@ -143,19 +207,36 @@ class _TargetCalibrationState extends State<TargetCalibration> {
_centerX = _centerX + deltaX;
_centerY = _centerY + deltaY;
} else if (_isDraggingRadius) {
// Adjust outer radius (scales all rings proportionally)
// Adjust outer radius
final newRadius = _radius + deltaX * (size.width / minDim);
_radius = newRadius.clamp(0.05, 3.0);
_radius = newRadius.clamp(math.max(0.05, _innerRadius + 0.01), 3.0);
_initRingRadii(); // Recalculate linear separation
} else if (_isDraggingInnerRadius) {
// Adjust inner radius (sliding up reduces Y, so deltaY is negative when growing. Thus we subtract deltaY)
final newInnerRadius = _innerRadius - deltaY * (size.height / minDim);
_innerRadius = newInnerRadius.clamp(
0.01,
math.max(0.01, _radius - 0.01),
);
_initRingRadii(); // Recalculate linear separation
}
});
widget.onCalibrationChanged(_centerX, _centerY, _radius, _ringCount, ringRadii: _ringRadii);
widget.onCalibrationChanged(
_centerX,
_centerY,
_innerRadius,
_radius,
_ringCount,
ringRadii: _ringRadii,
);
}
void _onPanEnd() {
setState(() {
_isDraggingCenter = false;
_isDraggingRadius = false;
_isDraggingInnerRadius = false;
});
}
@@ -170,21 +251,25 @@ class _CalibrationPainter extends CustomPainter {
final double centerX;
final double centerY;
final double radius;
final double innerRadius;
final int ringCount;
final List<double> ringRadii;
final TargetType targetType;
final bool isDraggingCenter;
final bool isDraggingRadius;
final bool isDraggingInnerRadius;
_CalibrationPainter({
required this.centerX,
required this.centerY,
required this.radius,
required this.innerRadius,
required this.ringCount,
required this.ringRadii,
required this.targetType,
required this.isDraggingCenter,
required this.isDraggingRadius,
required this.isDraggingInnerRadius,
});
@override
@@ -192,6 +277,7 @@ class _CalibrationPainter extends CustomPainter {
final centerPx = Offset(centerX * size.width, centerY * size.height);
final minDim = size.width < size.height ? size.width : size.height;
final baseRadiusPx = radius * minDim;
final innerRadiusPx = innerRadius * minDim;
if (targetType == TargetType.concentric) {
_drawConcentricZones(canvas, size, centerPx, baseRadiusPx);
@@ -199,17 +285,42 @@ class _CalibrationPainter extends CustomPainter {
_drawSilhouetteZones(canvas, size, centerPx, baseRadiusPx);
}
// Fullscreen crosshairs when dragging center
if (isDraggingCenter) {
final crosshairLinePaint = Paint()
..color = AppTheme.successColor.withValues(alpha: 0.5)
..strokeWidth = 1;
canvas.drawLine(
Offset(0, centerPx.dy),
Offset(size.width, centerPx.dy),
crosshairLinePaint,
);
canvas.drawLine(
Offset(centerPx.dx, 0),
Offset(centerPx.dx, size.height),
crosshairLinePaint,
);
}
// Draw center handle
_drawCenterHandle(canvas, centerPx);
// Draw radius handle (for outer ring)
_drawRadiusHandle(canvas, size, centerPx, baseRadiusPx);
// Draw inner radius handle
_drawInnerRadiusHandle(canvas, size, centerPx, innerRadiusPx);
// Draw instructions
_drawInstructions(canvas, size);
}
void _drawConcentricZones(Canvas canvas, Size size, Offset center, double baseRadius) {
void _drawConcentricZones(
Canvas canvas,
Size size,
Offset center,
double baseRadius,
) {
// Generate colors for zones
List<Color> zoneColors = [];
for (int i = 0; i < ringCount; i++) {
@@ -235,7 +346,9 @@ class _CalibrationPainter extends CustomPainter {
// Draw from outside to inside
for (int i = ringCount - 1; i >= 0; i--) {
final ringRadius = ringRadii.length > i ? ringRadii[i] : (i + 1) / ringCount;
final ringRadius = ringRadii.length > i
? ringRadii[i]
: (i + 1) / ringCount;
final zoneRadius = baseRadius * ringRadius;
zonePaint.color = zoneColors[i];
@@ -244,12 +357,12 @@ class _CalibrationPainter extends CustomPainter {
}
// Draw zone labels (only if within visible area)
final textPainter = TextPainter(
textDirection: TextDirection.ltr,
);
final textPainter = TextPainter(textDirection: TextDirection.ltr);
for (int i = 0; i < ringCount; i++) {
final ringRadius = ringRadii.length > i ? ringRadii[i] : (i + 1) / ringCount;
final ringRadius = ringRadii.length > i
? ringRadii[i]
: (i + 1) / ringCount;
final prevRingRadius = i > 0
? (ringRadii.length > i - 1 ? ringRadii[i - 1] : i / ringCount)
: 0.0;
@@ -268,9 +381,7 @@ class _CalibrationPainter extends CustomPainter {
color: Colors.white.withValues(alpha: 0.9),
fontSize: 12,
fontWeight: FontWeight.bold,
shadows: const [
Shadow(color: Colors.black, blurRadius: 2),
],
shadows: const [Shadow(color: Colors.black, blurRadius: 2)],
),
);
textPainter.layout();
@@ -278,14 +389,24 @@ class _CalibrationPainter extends CustomPainter {
// Draw label on the right side of each zone
final labelY = center.dy - textPainter.height / 2;
if (labelY >= 0 && labelY <= size.height) {
textPainter.paint(canvas, Offset(labelX - textPainter.width / 2, labelY));
textPainter.paint(
canvas,
Offset(labelX - textPainter.width / 2, labelY),
);
}
}
}
void _drawSilhouetteZones(Canvas canvas, Size size, Offset center, double radius) {
void _drawSilhouetteZones(
Canvas canvas,
Size size,
Offset center,
double radius,
) {
// Simplified silhouette zones
final paint = Paint()..style = PaintingStyle.stroke..strokeWidth = 2;
final paint = Paint()
..style = PaintingStyle.stroke
..strokeWidth = 2;
// Draw silhouette outline (simplified as rectangle for now)
final silhouetteWidth = radius * 0.8;
@@ -293,7 +414,11 @@ class _CalibrationPainter extends CustomPainter {
paint.color = Colors.green.withValues(alpha: 0.5);
canvas.drawRect(
Rect.fromCenter(center: center, width: silhouetteWidth, height: silhouetteHeight),
Rect.fromCenter(
center: center,
width: silhouetteWidth,
height: silhouetteHeight,
),
paint,
);
}
@@ -316,17 +441,36 @@ class _CalibrationPainter extends CustomPainter {
final crossPaint = Paint()
..color = isDraggingCenter ? AppTheme.successColor : AppTheme.primaryColor
..strokeWidth = 2;
canvas.drawLine(Offset(center.dx - 20, center.dy), Offset(center.dx - 8, center.dy), crossPaint);
canvas.drawLine(Offset(center.dx + 8, center.dy), Offset(center.dx + 20, center.dy), crossPaint);
canvas.drawLine(Offset(center.dx, center.dy - 20), Offset(center.dx, center.dy - 8), crossPaint);
canvas.drawLine(Offset(center.dx, center.dy + 8), Offset(center.dx, center.dy + 20), crossPaint);
canvas.drawLine(
Offset(center.dx - 20, center.dy),
Offset(center.dx - 8, center.dy),
crossPaint,
);
canvas.drawLine(
Offset(center.dx + 8, center.dy),
Offset(center.dx + 20, center.dy),
crossPaint,
);
canvas.drawLine(
Offset(center.dx, center.dy - 20),
Offset(center.dx, center.dy - 8),
crossPaint,
);
canvas.drawLine(
Offset(center.dx, center.dy + 8),
Offset(center.dx, center.dy + 20),
crossPaint,
);
}
void _drawRadiusHandle(Canvas canvas, Size size, Offset center, double baseRadius) {
void _drawRadiusHandle(
Canvas canvas,
Size size,
Offset center,
double baseRadius,
) {
// Radius handle on the right edge of the outermost ring
final outerRingRadius = ringRadii.isNotEmpty ? ringRadii.last : 1.0;
final actualRadius = baseRadius * outerRingRadius;
final actualHandleX = center.dx + actualRadius;
final actualHandleX = center.dx + baseRadius;
final clampedHandleX = actualHandleX.clamp(20.0, size.width - 20);
final clampedHandleY = center.dy.clamp(20.0, size.height - 20);
final handlePos = Offset(clampedHandleX, clampedHandleY);
@@ -376,7 +520,7 @@ class _CalibrationPainter extends CustomPainter {
// Label
final textPainter = TextPainter(
text: const TextSpan(
text: 'RAYON',
text: 'EXT.',
style: TextStyle(
color: Colors.white,
fontSize: 8,
@@ -392,6 +536,78 @@ class _CalibrationPainter extends CustomPainter {
);
}
void _drawInnerRadiusHandle(
Canvas canvas,
Size size,
Offset center,
double innerRadiusPx,
) {
// Inner radius handle on the top edge of the innermost ring
final actualHandleY = center.dy - innerRadiusPx;
final clampedHandleX = center.dx.clamp(20.0, size.width - 20);
final clampedHandleY = actualHandleY.clamp(20.0, size.height - 20);
final handlePos = Offset(clampedHandleX, clampedHandleY);
final isClamped = actualHandleY < 20.0;
final paint = Paint()
..color = isDraggingInnerRadius
? AppTheme.successColor
: (isClamped ? Colors.orange : Colors.purpleAccent)
..style = PaintingStyle.fill;
// Draw handle
canvas.drawCircle(handlePos, 14, paint);
// Up/Down arrow indicators
final arrowPaint = Paint()
..color = Colors.white
..strokeWidth = 2
..style = PaintingStyle.stroke;
// Up arrow
canvas.drawLine(
Offset(handlePos.dx, handlePos.dy - 4),
Offset(handlePos.dx - 4, handlePos.dy - 8),
arrowPaint,
);
canvas.drawLine(
Offset(handlePos.dx, handlePos.dy - 4),
Offset(handlePos.dx + 4, handlePos.dy - 8),
arrowPaint,
);
// Down arrow
canvas.drawLine(
Offset(handlePos.dx, handlePos.dy + 4),
Offset(handlePos.dx - 4, handlePos.dy + 8),
arrowPaint,
);
canvas.drawLine(
Offset(handlePos.dx, handlePos.dy + 4),
Offset(handlePos.dx + 4, handlePos.dy + 8),
arrowPaint,
);
// Label
final textPainter = TextPainter(
text: const TextSpan(
text: 'INT.',
style: TextStyle(
color: Colors.white,
fontSize: 8,
fontWeight: FontWeight.bold,
),
),
textDirection: TextDirection.ltr,
);
textPainter.layout();
textPainter.paint(
canvas,
Offset(handlePos.dx - textPainter.width / 2, handlePos.dy - 24),
);
}
void _drawInstructions(Canvas canvas, Size size) {
const instruction = 'Deplacez le centre ou ajustez le rayon';
@@ -418,9 +634,11 @@ class _CalibrationPainter extends CustomPainter {
return centerX != oldDelegate.centerX ||
centerY != oldDelegate.centerY ||
radius != oldDelegate.radius ||
innerRadius != oldDelegate.innerRadius ||
ringCount != oldDelegate.ringCount ||
isDraggingCenter != oldDelegate.isDraggingCenter ||
isDraggingRadius != oldDelegate.isDraggingRadius ||
isDraggingInnerRadius != oldDelegate.isDraggingInnerRadius ||
ringRadii != oldDelegate.ringRadii;
}
}

View File

@@ -6,13 +6,13 @@
library;
import 'dart:io';
import 'package:google_mlkit_document_scanner/google_mlkit_document_scanner.dart';
import 'package:flutter/material.dart';
import 'package:image_picker/image_picker.dart';
import '../../core/constants/app_constants.dart';
import '../../core/theme/app_theme.dart';
import '../../data/models/target_type.dart';
import '../crop/crop_screen.dart';
import 'widgets/image_source_button.dart';
class CaptureScreen extends StatefulWidget {
@@ -31,24 +31,12 @@ class _CaptureScreenState extends State<CaptureScreen> {
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: const Text('Nouvelle Analyse'),
),
appBar: AppBar(title: const Text('Nouvelle Analyse')),
body: SingleChildScrollView(
padding: const EdgeInsets.all(AppConstants.defaultPadding),
child: Column(
crossAxisAlignment: CrossAxisAlignment.stretch,
children: [
// TODO: once the silhouette target is in place, re-add the selector
// Target type selection
// _buildSectionTitle('Type de Cible'),
// const SizedBox(height: 12),
// TargetTypeSelector(
// selectedType: _selectedType,
// onTypeSelected: (type) {
// setState(() => _selectedType = type);
// },
// ),
const SizedBox(height: AppConstants.largePadding),
// Image source selection
@@ -59,8 +47,8 @@ class _CaptureScreenState extends State<CaptureScreen> {
Expanded(
child: ImageSourceButton(
icon: Icons.camera_alt,
label: 'Camera',
onPressed: _isLoading ? null : () => _captureImage(ImageSource.camera),
label: 'Scanner',
onPressed: _isLoading ? null : _scanDocument,
),
),
const SizedBox(width: 12),
@@ -68,7 +56,9 @@ class _CaptureScreenState extends State<CaptureScreen> {
child: ImageSourceButton(
icon: Icons.photo_library,
label: 'Galerie',
onPressed: _isLoading ? null : () => _captureImage(ImageSource.gallery),
onPressed: _isLoading
? null
: () => _captureImage(ImageSource.gallery),
),
),
],
@@ -87,16 +77,15 @@ class _CaptureScreenState extends State<CaptureScreen> {
_buildImagePreview(),
// Guide text
if (_selectedImagePath == null && !_isLoading)
_buildGuide(),
if (_selectedImagePath == null && !_isLoading) _buildGuide(),
],
),
),
floatingActionButton: _selectedImagePath != null
? FloatingActionButton.extended(
onPressed: _analyzeImage,
icon: const Icon(Icons.analytics),
label: const Text('Analyser'),
icon: const Icon(Icons.arrow_forward),
label: const Text('Suivant'),
)
: null,
);
@@ -105,9 +94,9 @@ class _CaptureScreenState extends State<CaptureScreen> {
Widget _buildSectionTitle(String title) {
return Text(
title,
style: Theme.of(context).textTheme.titleMedium?.copyWith(
fontWeight: FontWeight.bold,
),
style: Theme.of(
context,
).textTheme.titleMedium?.copyWith(fontWeight: FontWeight.bold),
);
}
@@ -160,7 +149,9 @@ class _CaptureScreenState extends State<CaptureScreen> {
Expanded(
child: Text(
'Assurez-vous que la cible est bien centree et visible.',
style: TextStyle(color: AppTheme.warningColor.withValues(alpha: 0.8)),
style: TextStyle(
color: AppTheme.warningColor.withValues(alpha: 0.8),
),
),
),
],
@@ -175,20 +166,19 @@ class _CaptureScreenState extends State<CaptureScreen> {
padding: const EdgeInsets.all(AppConstants.defaultPadding),
child: Column(
children: [
Icon(
Icons.help_outline,
size: 48,
color: Colors.grey[400],
),
Icon(Icons.help_outline, size: 48, color: Colors.grey[400]),
const SizedBox(height: 12),
Text(
'Conseils pour une bonne analyse',
style: Theme.of(context).textTheme.titleSmall?.copyWith(
fontWeight: FontWeight.bold,
),
style: Theme.of(
context,
).textTheme.titleSmall?.copyWith(fontWeight: FontWeight.bold),
),
const SizedBox(height: 12),
_buildGuideItem(Icons.crop_free, 'Cadrez la cible entiere dans l\'image'),
_buildGuideItem(
Icons.crop_free,
'Cadrez la cible entiere dans l\'image',
),
_buildGuideItem(Icons.wb_sunny, 'Utilisez un bon eclairage'),
_buildGuideItem(Icons.straighten, 'Prenez la photo de face'),
_buildGuideItem(Icons.blur_off, 'Evitez les images floues'),
@@ -211,6 +201,39 @@ class _CaptureScreenState extends State<CaptureScreen> {
);
}
Future<void> _scanDocument() async {
setState(() => _isLoading = true);
try {
final options = DocumentScannerOptions(
documentFormat: DocumentFormat.jpeg,
mode: ScannerMode.base,
pageLimit: 1,
isGalleryImport: false,
);
final scanner = DocumentScanner(options: options);
final documents = await scanner.scanDocument();
if (documents.images.isNotEmpty) {
setState(() => _selectedImagePath = documents.images.first);
}
} catch (e) {
if (mounted) {
ScaffoldMessenger.of(context).showSnackBar(
SnackBar(
content: Text('Erreur lors du scan: $e'),
backgroundColor: AppTheme.errorColor,
),
);
}
} finally {
if (mounted) {
setState(() => _isLoading = false);
}
}
}
Future<void> _captureImage(ImageSource source) async {
setState(() => _isLoading = true);

View File

@@ -119,7 +119,8 @@ class _CropScreenState extends State<CropScreen> {
_viewportSize = Size(constraints.maxWidth, constraints.maxHeight);
// Size of the crop square (85% of the smallest dimension)
_cropSize = math.min(constraints.maxWidth, constraints.maxHeight) * 0.85;
_cropSize =
math.min(constraints.maxWidth, constraints.maxHeight) * 0.85;
// Compute the initial scale if it hasn't been done yet
if (_scale == 1.0 && _offset == Offset.zero) {
@@ -138,7 +139,7 @@ class _CropScreenState extends State<CropScreen> {
child: Transform(
transform: Matrix4.identity()
..setTranslationRaw(_offset.dx, _offset.dy, 0)
..scale(_scale, _scale, 1.0),
..scale(_scale, _scale),
alignment: Alignment.center,
child: Image.file(
File(widget.imagePath),
@@ -153,10 +154,7 @@ class _CropScreenState extends State<CropScreen> {
// Crop overlay
Positioned.fill(
child: IgnorePointer(
child: CropOverlay(
cropSize: _cropSize,
showGrid: true,
),
child: CropOverlay(cropSize: _cropSize, showGrid: true),
),
),

View File

@@ -13,20 +13,13 @@ class CropOverlay extends StatelessWidget {
/// Whether to show the rule-of-thirds grid
final bool showGrid;
const CropOverlay({
super.key,
required this.cropSize,
this.showGrid = true,
});
const CropOverlay({super.key, required this.cropSize, this.showGrid = true});
@override
Widget build(BuildContext context) {
return CustomPaint(
size: Size.infinite,
painter: _CropOverlayPainter(
cropSize: cropSize,
showGrid: showGrid,
),
painter: _CropOverlayPainter(cropSize: cropSize, showGrid: showGrid),
);
}
}
@@ -35,10 +28,7 @@ class _CropOverlayPainter extends CustomPainter {
final double cropSize;
final bool showGrid;
_CropOverlayPainter({
required this.cropSize,
required this.showGrid,
});
_CropOverlayPainter({required this.cropSize, required this.showGrid});
@override
void paint(Canvas canvas, Size size) {
@@ -77,6 +67,9 @@ class _CropOverlayPainter extends CustomPainter {
if (showGrid) {
_drawGrid(canvas, cropRect);
}
// Draw the center point (crosshair)
_drawCenterPoint(canvas, cropRect);
}
void _drawCorners(Canvas canvas, Rect rect) {
@@ -171,6 +164,38 @@ class _CropOverlayPainter extends CustomPainter {
);
}
void _drawCenterPoint(Canvas canvas, Rect rect) {
final centerPaint = Paint()
..color = Colors.white.withValues(alpha: 0.8)
..style = PaintingStyle.stroke
..strokeWidth = 2;
const size = 10.0;
final centerX = rect.center.dx;
final centerY = rect.center.dy;
// Horizontal line
canvas.drawLine(
Offset(centerX - size, centerY),
Offset(centerX + size, centerY),
centerPaint,
);
// Vertical line
canvas.drawLine(
Offset(centerX, centerY - size),
Offset(centerX, centerY + size),
centerPaint,
);
// Small center circle for precision (optional, but helps with aiming)
canvas.drawCircle(
rect.center,
2,
Paint()..color = Colors.red.withValues(alpha: 0.6),
);
}
@override
bool shouldRepaint(covariant _CropOverlayPainter oldDelegate) {
return cropSize != oldDelegate.cropSize || showGrid != oldDelegate.showGrid;

View File

@@ -5,7 +5,6 @@
/// standard deviation and regional distribution of shots.
library;
import 'package:flutter/foundation.dart';
import 'package:flutter/material.dart';
import 'package:provider/provider.dart';
import '../../core/constants/app_constants.dart';
@@ -69,28 +68,38 @@ class _StatisticsScreenState extends State<StatisticsScreen> {
}
void _calculateStats() {
debugPrint('Calculating stats for ${_allSessions.length} sessions, period: $_selectedPeriod');
debugPrint(
'Calculating stats for ${_allSessions.length} sessions, period: $_selectedPeriod',
);
for (final session in _allSessions) {
debugPrint(' Session: ${session.id}, shots: ${session.shots.length}, date: ${session.createdAt}');
debugPrint(
' Session: ${session.id}, shots: ${session.shots.length}, date: ${session.createdAt}',
);
}
_statistics = _statisticsService.calculateStatistics(
_allSessions,
period: _selectedPeriod,
);
debugPrint('Statistics result: totalShots=${_statistics?.totalShots}, totalScore=${_statistics?.totalScore}');
debugPrint(
'Statistics result: totalShots=${_statistics?.totalShots}, totalScore=${_statistics?.totalScore}',
);
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text(widget.singleSession != null ? 'Statistiques Session' : 'Statistiques'),
title: Text(
widget.singleSession != null
? 'Statistiques Session'
: 'Statistiques',
),
),
body: _isLoading
? const Center(child: CircularProgressIndicator())
: _statistics == null || _statistics!.totalShots == 0
? _buildEmptyState()
: _buildStatistics(),
? _buildEmptyState()
: _buildStatistics(),
);
}
@@ -101,7 +110,11 @@ class _StatisticsScreenState extends State<StatisticsScreen> {
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: [
Icon(Icons.analytics_outlined, size: 64, color: Colors.grey.shade400),
Icon(
Icons.analytics_outlined,
size: 64,
color: Colors.grey.shade400,
),
const SizedBox(height: 16),
Text(
'Aucune donnee disponible',
@@ -292,11 +305,17 @@ class _StatisticsScreenState extends State<StatisticsScreen> {
children: [
Padding(
padding: const EdgeInsets.only(left: 16),
child: Text('Peu', style: TextStyle(fontSize: 12, color: Colors.grey.shade600)),
child: Text(
'Peu',
style: TextStyle(fontSize: 12, color: Colors.grey.shade600),
),
),
Padding(
padding: const EdgeInsets.only(right: 16),
child: Text('Beaucoup', style: TextStyle(fontSize: 12, color: Colors.grey.shade600)),
child: Text(
'Beaucoup',
style: TextStyle(fontSize: 12, color: Colors.grey.shade600),
),
),
],
),
@@ -306,28 +325,6 @@ class _StatisticsScreenState extends State<StatisticsScreen> {
);
}
Widget _buildLegendItem(Color color, String label) {
return Padding(
padding: const EdgeInsets.symmetric(horizontal: 4),
child: Row(
mainAxisSize: MainAxisSize.min,
children: [
Container(
width: 16,
height: 16,
decoration: BoxDecoration(
color: color,
borderRadius: BorderRadius.circular(2),
border: Border.all(color: Colors.grey.shade400),
),
),
const SizedBox(width: 4),
Text(label, style: const TextStyle(fontSize: 10)),
],
),
);
}
Widget _buildPrecisionSection() {
final precision = _statistics!.precision;
@@ -339,7 +336,10 @@ class _StatisticsScreenState extends State<StatisticsScreen> {
children: [
Row(
children: [
const Icon(Icons.center_focus_strong, color: AppTheme.successColor),
const Icon(
Icons.center_focus_strong,
color: AppTheme.successColor,
),
const SizedBox(width: 8),
const Text(
'Precision',
@@ -368,12 +368,18 @@ class _StatisticsScreenState extends State<StatisticsScreen> {
],
),
const Divider(height: 32),
_buildStatRow('Distance moyenne du centre',
'${(precision.avgDistanceFromCenter * 100).toStringAsFixed(1)}%'),
_buildStatRow('Diametre de groupement',
'${(precision.groupingDiameter * 100).toStringAsFixed(1)}%'),
_buildStatRow('Score moyen',
_statistics!.avgScore.toStringAsFixed(2)),
_buildStatRow(
'Distance moyenne du centre',
'${(precision.avgDistanceFromCenter * 100).toStringAsFixed(1)}%',
),
_buildStatRow(
'Diametre de groupement',
'${(precision.groupingDiameter * 100).toStringAsFixed(1)}%',
),
_buildStatRow(
'Score moyen',
_statistics!.avgScore.toStringAsFixed(2),
),
_buildStatRow('Meilleur score', '${_statistics!.maxScore}'),
_buildStatRow('Plus bas score', '${_statistics!.minScore}'),
],
@@ -386,8 +392,8 @@ class _StatisticsScreenState extends State<StatisticsScreen> {
final color = value > 70
? AppTheme.successColor
: value > 40
? AppTheme.warningColor
: AppTheme.errorColor;
? AppTheme.warningColor
: AppTheme.errorColor;
return Column(
children: [
@@ -405,7 +411,7 @@ class _StatisticsScreenState extends State<StatisticsScreen> {
),
),
Text(
'${value.toStringAsFixed(0)}',
value.toStringAsFixed(0),
style: TextStyle(
fontSize: 20,
fontWeight: FontWeight.bold,
@@ -415,10 +421,7 @@ class _StatisticsScreenState extends State<StatisticsScreen> {
],
),
const SizedBox(height: 8),
Text(
title,
style: const TextStyle(fontWeight: FontWeight.bold),
),
Text(title, style: const TextStyle(fontWeight: FontWeight.bold)),
Text(
subtitle,
style: TextStyle(fontSize: 10, color: Colors.grey.shade600),
@@ -439,7 +442,10 @@ class _StatisticsScreenState extends State<StatisticsScreen> {
children: [
Row(
children: [
const Icon(Icons.stacked_line_chart, color: AppTheme.warningColor),
const Icon(
Icons.stacked_line_chart,
color: AppTheme.warningColor,
),
const SizedBox(width: 8),
const Text(
'Ecart Type',
@@ -453,21 +459,32 @@ class _StatisticsScreenState extends State<StatisticsScreen> {
style: TextStyle(color: Colors.grey.shade600, fontSize: 12),
),
const SizedBox(height: 16),
_buildStatRow('Ecart type X (horizontal)',
'${(stdDev.stdDevX * 100).toStringAsFixed(2)}%'),
_buildStatRow('Ecart type Y (vertical)',
'${(stdDev.stdDevY * 100).toStringAsFixed(2)}%'),
_buildStatRow('Ecart type radial',
'${(stdDev.stdDevRadial * 100).toStringAsFixed(2)}%'),
_buildStatRow('Ecart type score',
stdDev.stdDevScore.toStringAsFixed(2)),
_buildStatRow(
'Ecart type X (horizontal)',
'${(stdDev.stdDevX * 100).toStringAsFixed(2)}%',
),
_buildStatRow(
'Ecart type Y (vertical)',
'${(stdDev.stdDevY * 100).toStringAsFixed(2)}%',
),
_buildStatRow(
'Ecart type radial',
'${(stdDev.stdDevRadial * 100).toStringAsFixed(2)}%',
),
_buildStatRow(
'Ecart type score',
stdDev.stdDevScore.toStringAsFixed(2),
),
const Divider(height: 24),
_buildStatRow('Position moyenne X',
'${(stdDev.meanX * 100).toStringAsFixed(1)}%'),
_buildStatRow('Position moyenne Y',
'${(stdDev.meanY * 100).toStringAsFixed(1)}%'),
_buildStatRow('Score moyen',
stdDev.meanScore.toStringAsFixed(2)),
_buildStatRow(
'Position moyenne X',
'${(stdDev.meanX * 100).toStringAsFixed(1)}%',
),
_buildStatRow(
'Position moyenne Y',
'${(stdDev.meanY * 100).toStringAsFixed(1)}%',
),
_buildStatRow('Score moyen', stdDev.meanScore.toStringAsFixed(2)),
],
),
),
@@ -504,7 +521,10 @@ class _StatisticsScreenState extends State<StatisticsScreen> {
),
child: Row(
children: [
const Icon(Icons.compass_calibration, color: AppTheme.primaryColor),
const Icon(
Icons.compass_calibration,
color: AppTheme.primaryColor,
),
const SizedBox(width: 12),
Expanded(
child: Column(
@@ -536,7 +556,10 @@ class _StatisticsScreenState extends State<StatisticsScreen> {
),
child: Row(
children: [
const Icon(Icons.warning_amber, color: AppTheme.warningColor),
const Icon(
Icons.warning_amber,
color: AppTheme.warningColor,
),
const SizedBox(width: 12),
Expanded(
child: Column(
@@ -556,7 +579,10 @@ class _StatisticsScreenState extends State<StatisticsScreen> {
const SizedBox(height: 16),
// Sector distribution
const Text('Repartition par secteur:', style: TextStyle(fontWeight: FontWeight.bold)),
const Text(
'Repartition par secteur:',
style: TextStyle(fontWeight: FontWeight.bold),
),
const SizedBox(height: 8),
Wrap(
spacing: 8,
@@ -572,7 +598,10 @@ class _StatisticsScreenState extends State<StatisticsScreen> {
const SizedBox(height: 16),
// Quadrant distribution
const Text('Repartition par quadrant:', style: TextStyle(fontWeight: FontWeight.bold)),
const Text(
'Repartition par quadrant:',
style: TextStyle(fontWeight: FontWeight.bold),
),
const SizedBox(height: 8),
_buildQuadrantGrid(regional.quadrantDistribution),
],
@@ -598,7 +627,9 @@ class _StatisticsScreenState extends State<StatisticsScreen> {
return Container(
padding: const EdgeInsets.symmetric(horizontal: 12, vertical: 6),
decoration: BoxDecoration(
color: count > 0 ? AppTheme.primaryColor.withValues(alpha: 0.1) : Colors.grey.shade100,
color: count > 0
? AppTheme.primaryColor.withValues(alpha: 0.1)
: Colors.grey.shade100,
borderRadius: BorderRadius.circular(16),
border: Border.all(
color: count > 0 ? AppTheme.primaryColor : Colors.grey.shade300,
@@ -649,10 +680,7 @@ class _StatisticsScreenState extends State<StatisticsScreen> {
children: [
Text(
'$count',
style: const TextStyle(
fontWeight: FontWeight.bold,
fontSize: 24,
),
style: const TextStyle(fontWeight: FontWeight.bold, fontSize: 24),
),
Text(
'${percentage.toStringAsFixed(0)}%',
@@ -712,10 +740,7 @@ class _StatCard extends StatelessWidget {
color: color,
),
),
Text(
title,
style: TextStyle(color: Colors.grey.shade600),
),
Text(title, style: TextStyle(color: Colors.grey.shade600)),
],
),
),

View File

@@ -10,6 +10,7 @@ import 'services/target_detection_service.dart';
import 'services/score_calculator_service.dart';
import 'services/grouping_analyzer_service.dart';
import 'services/image_processing_service.dart';
import 'services/yolo_impact_detection_service.dart';
void main() async {
WidgetsFlutterBinding.ensureInitialized();
@@ -33,9 +34,13 @@ void main() async {
Provider<ImageProcessingService>(
create: (_) => ImageProcessingService(),
),
Provider<YOLOImpactDetectionService>(
create: (_) => YOLOImpactDetectionService(),
),
Provider<TargetDetectionService>(
create: (context) => TargetDetectionService(
imageProcessingService: context.read<ImageProcessingService>(),
yoloService: context.read<YOLOImpactDetectionService>(),
),
),
Provider<ScoreCalculatorService>(
@@ -44,9 +49,7 @@ void main() async {
Provider<GroupingAnalyzerService>(
create: (_) => GroupingAnalyzerService(),
),
Provider<SessionRepository>(
create: (_) => SessionRepository(),
),
Provider<SessionRepository>(create: (_) => SessionRepository()),
],
child: const BullyApp(),
),

View File

@@ -8,6 +8,7 @@ library;
import 'dart:io';
import 'dart:math' as math;
import 'package:image/image.dart' as img;
import 'package:opencv_dart/opencv_dart.dart' as cv;
import 'package:path_provider/path_provider.dart';
/// Distortion parameters computed from the calibration
@@ -281,16 +282,56 @@ class DistortionCorrectionService {
final p11 = image.getPixel(x1, y1);
// Interpolate each channel
final r = _lerp2D(p00.r.toDouble(), p10.r.toDouble(), p01.r.toDouble(), p11.r.toDouble(), wx, wy);
final g = _lerp2D(p00.g.toDouble(), p10.g.toDouble(), p01.g.toDouble(), p11.g.toDouble(), wx, wy);
final b = _lerp2D(p00.b.toDouble(), p10.b.toDouble(), p01.b.toDouble(), p11.b.toDouble(), wx, wy);
final a = _lerp2D(p00.a.toDouble(), p10.a.toDouble(), p01.a.toDouble(), p11.a.toDouble(), wx, wy);
final r = _lerp2D(
p00.r.toDouble(),
p10.r.toDouble(),
p01.r.toDouble(),
p11.r.toDouble(),
wx,
wy,
);
final g = _lerp2D(
p00.g.toDouble(),
p10.g.toDouble(),
p01.g.toDouble(),
p11.g.toDouble(),
wx,
wy,
);
final b = _lerp2D(
p00.b.toDouble(),
p10.b.toDouble(),
p01.b.toDouble(),
p11.b.toDouble(),
wx,
wy,
);
final a = _lerp2D(
p00.a.toDouble(),
p10.a.toDouble(),
p01.a.toDouble(),
p11.a.toDouble(),
wx,
wy,
);
return img.ColorRgba8(r.round().clamp(0, 255), g.round().clamp(0, 255), b.round().clamp(0, 255), a.round().clamp(0, 255));
return img.ColorRgba8(
r.round().clamp(0, 255),
g.round().clamp(0, 255),
b.round().clamp(0, 255),
a.round().clamp(0, 255),
);
}
/// 2D linear interpolation
double _lerp2D(double v00, double v10, double v01, double v11, double wx, double wy) {
double _lerp2D(
double v00,
double v10,
double v01,
double v11,
double wx,
double wy,
) {
final top = v00 * (1 - wx) + v10 * wx;
final bottom = v01 * (1 - wx) + v11 * wx;
return top * (1 - wy) + bottom * wy;
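The `_lerp2D` helper above is a plain bilinear blend of four corner values by the fractional offsets `wx` and `wy`. A minimal standalone sketch of the same arithmetic (the `lerp2D` name is illustrative, not the PR's symbol):

```dart
// Blend four corner samples: first along x on the top and bottom edges,
// then along y between those two intermediate values.
double lerp2D(double v00, double v10, double v01, double v11,
    double wx, double wy) {
  final top = v00 * (1 - wx) + v10 * wx; // top edge blend
  final bottom = v01 * (1 - wx) + v11 * wx; // bottom edge blend
  return top * (1 - wy) + bottom * wy; // vertical blend
}

void main() {
  // At a corner (wx = wy = 0) the corner value comes back unchanged.
  assert(lerp2D(10, 20, 30, 40, 0, 0) == 10);
  // At the exact center the result is the mean of the four corners.
  assert(lerp2D(10, 20, 30, 40, 0.5, 0.5) == 25);
  print(lerp2D(10, 20, 30, 40, 0.5, 0.5)); // 25.0
}
```

This is why `_bilinearInterpolate` calls it once per channel: each of r, g, b, a is blended independently.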
@@ -320,7 +361,9 @@ class DistortionCorrectionService {
final height = image.height;
// Convert the normalized coordinates to pixels
final srcCorners = corners.map((c) => (x: c.x * width, y: c.y * height)).toList();
final srcCorners = corners
.map((c) => (x: c.x * width, y: c.y * height))
.toList();
// Compute the size of the destination rectangle
// Use the average of the widths and heights
@@ -336,20 +379,21 @@ class DistortionCorrectionService {
final result = img.Image(width: dstWidth, height: dstHeight);
// Compute the perspective transform matrix
final matrix = _computePerspectiveMatrix(
srcCorners,
[
(x: 0.0, y: 0.0),
(x: dstWidth.toDouble(), y: 0.0),
(x: dstWidth.toDouble(), y: dstHeight.toDouble()),
(x: 0.0, y: dstHeight.toDouble()),
],
);
final matrix = _computePerspectiveMatrix(srcCorners, [
(x: 0.0, y: 0.0),
(x: dstWidth.toDouble(), y: 0.0),
(x: dstWidth.toDouble(), y: dstHeight.toDouble()),
(x: 0.0, y: dstHeight.toDouble()),
]);
// Apply the transform
for (int y = 0; y < dstHeight; y++) {
for (int x = 0; x < dstWidth; x++) {
final src = _applyPerspectiveTransform(matrix, x.toDouble(), y.toDouble());
final src = _applyPerspectiveTransform(
matrix,
x.toDouble(),
y.toDouble(),
);
if (src.x >= 0 && src.x < width && src.y >= 0 && src.y < height) {
final pixel = _bilinearInterpolate(image, src.x, src.y);
@@ -408,8 +452,11 @@ class DistortionCorrectionService {
// The system 'a' has size 8x9 (8 equations, 9 unknowns).
// We fix h8 = 1.0 to solve the system, which gives us an 8x8 system.
final int n = 8;
final List<List<double>> matrix = List.generate(n, (i) => List<double>.from(a[i]));
final List<List<double>> matrix = List.generate(
n,
(i) => List<double>.from(a[i]),
);
// Vector B (the constants on the other side of the equality)
// In DLT, -h8 * dx (or dy) becomes the constant term.
final List<double> b = List.generate(n, (i) => -matrix[i][8]);
@@ -428,7 +475,7 @@ class DistortionCorrectionService {
final List<double> tempRow = matrix[i];
matrix[i] = matrix[pivot];
matrix[pivot] = tempRow;
final double tempB = b[i];
b[i] = b[pivot];
b[pivot] = tempB;
@@ -462,7 +509,11 @@ class DistortionCorrectionService {
return h;
}
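The row swap in the hunk above is the partial-pivoting step of a Gaussian elimination over the 8x8 DLT system (with h8 fixed to 1). As a rough self-contained sketch of that kind of solver — the `solve` helper is hypothetical, not the PR's exact implementation:

```dart
// Solve A·x = b by Gaussian elimination with partial pivoting.
// Mutates a and b in place, as the service's solver appears to do.
List<double> solve(List<List<double>> a, List<double> b) {
  final n = b.length;
  for (var col = 0; col < n; col++) {
    // Partial pivoting: bring the largest remaining entry onto the diagonal.
    var pivot = col;
    for (var row = col + 1; row < n; row++) {
      if (a[row][col].abs() > a[pivot][col].abs()) pivot = row;
    }
    if (pivot != col) {
      final tmpRow = a[col]; a[col] = a[pivot]; a[pivot] = tmpRow;
      final tmpB = b[col]; b[col] = b[pivot]; b[pivot] = tmpB;
    }
    // Eliminate the column below the pivot.
    for (var row = col + 1; row < n; row++) {
      final f = a[row][col] / a[col][col];
      for (var k = col; k < n; k++) {
        a[row][k] -= f * a[col][k];
      }
      b[row] -= f * b[col];
    }
  }
  // Back substitution.
  final x = List<double>.filled(n, 0);
  for (var row = n - 1; row >= 0; row--) {
    var s = b[row];
    for (var k = row + 1; k < n; k++) {
      s -= a[row][k] * x[k];
    }
    x[row] = s / a[row][row];
  }
  return x;
}

void main() {
  // 4x + 2y = 10, x + 3y = 10  →  x = 1, y = 3
  final x = solve([
    [4.0, 2.0],
    [1.0, 3.0],
  ], [10.0, 10.0]);
  assert((x[0] - 1.0).abs() < 1e-9);
  assert((x[1] - 3.0).abs() < 1e-9);
}
```

Without the pivot swap, a near-zero diagonal entry would blow up `f` and wreck the homography's numerical stability.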
({double x, double y}) _applyPerspectiveTransform(List<double> h, double x, double y) {
({double x, double y}) _applyPerspectiveTransform(
List<double> h,
double x,
double y,
) {
final w = h[6] * x + h[7] * y + h[8];
if (w.abs() < 1e-10) {
return (x: x, y: y);
@@ -471,4 +522,553 @@ class DistortionCorrectionService {
final ny = (h[3] * x + h[4] * y + h[5]) / w;
return (x: nx, y: ny);
}
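`_applyPerspectiveTransform` applies a 3x3 homography stored row-major in 9 entries; the projective divide by `w` is what distinguishes it from an affine map. A standalone sketch under the same conventions (the `applyH` name is illustrative):

```dart
// Apply a 3x3 homography h (row-major, 9 entries) to a point.
({double x, double y}) applyH(List<double> h, double x, double y) {
  final w = h[6] * x + h[7] * y + h[8];
  if (w.abs() < 1e-10) return (x: x, y: y); // degenerate: leave unchanged
  return (
    x: (h[0] * x + h[1] * y + h[2]) / w,
    y: (h[3] * x + h[4] * y + h[5]) / w,
  );
}

void main() {
  // The identity homography maps every point to itself.
  const identity = [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0];
  final p = applyH(identity, 12, 34);
  assert(p.x == 12 && p.y == 34);
  // A pure translation by (5, -3).
  const shift = [1.0, 0.0, 5.0, 0.0, 1.0, -3.0, 0.0, 0.0, 1.0];
  final q = applyH(shift, 1, 2);
  assert(q.x == 6 && q.y == -1);
}
```

A consequence of the divide: homographies are defined up to scale, so multiplying all nine entries by the same constant leaves the mapping unchanged.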
/// Corrects perspective based on detecting circles (ellipses)
/// in the image.
///
/// This method tries to detect the most prominent ellipse (the target)
/// and computes a transform that makes it perfectly circular.
Future<String> correctPerspectiveUsingCircles(String imagePath) async {
try {
// 1. Load the image with OpenCV
final src = cv.imread(imagePath, flags: cv.IMREAD_COLOR);
if (src.isEmpty) throw Exception("Impossible de charger l'image");
// 2. Preprocessing
final gray = cv.cvtColor(src, cv.COLOR_BGR2GRAY);
final blurred = cv.gaussianBlur(gray, (5, 5), 0);
// Canny edge detector with an adaptive threshold (Otsu)
final thresh = cv.threshold(
blurred,
0,
255,
cv.THRESH_BINARY | cv.THRESH_OTSU,
);
final edges = cv.canny(blurred, thresh.$1 * 0.5, thresh.$1);
// 3. Find the contours
final contoursResult = cv.findContours(
edges,
cv.RETR_EXTERNAL,
cv.CHAIN_APPROX_SIMPLE,
);
final contours = contoursResult.$1;
if (contours.isEmpty) return imagePath; // No contours found
// 4. Find the best ellipse candidate
cv.RotatedRect? bestEllipse;
double maxArea = 0;
for (final contour in contours) {
if (contour.length < 5) continue; // fitEllipse needs at least 5 points
final area = cv.contourArea(contour);
if (area < 1000) continue; // Ignore small noise
final ellipse = cv.fitEllipse(contour);
// Selection criterion: we look for the largest ellipse that is close to a circle.
// But since we want to correct distortion, it MAY be flattened,
// so we simply take the largest reasonably centered ellipse.
if (area > maxArea) {
maxArea = area;
bestEllipse = ellipse;
}
}
if (bestEllipse == null) return imagePath;
// 5. Compute the perspective transform.
// The idea is to map the 4 vertices of the detected ellipse onto a perfect circle,
// or more simply, to map the ellipse's bounding rectangle onto a square.
// Source points: the 4 corners of the ellipse's rotated rect.
// Note: opencv_dart's RotatedRect does not seem to expose points() directly,
// so we can use boxPoints(ellipse) instead.
final boxPoints = cv.boxPoints(bestEllipse);
// boxPoints returns Mat (4x2 float32)
// Extract the 4 points
final List<cv.Point> srcPoints = [];
for (int i = 0; i < boxPoints.length; i++) {
// Access the point at index i directly
final point2f = boxPoints[i];
// Convert the float coordinates to int for cv.Point
srcPoints.add(cv.Point(point2f.x.toInt(), point2f.y.toInt()));
}
// Sort the points into the order: TL, TR, BR, BL
_sortPoints(srcPoints);
// Target dimensions
final side = math
.max(bestEllipse.size.width, bestEllipse.size.height)
.toInt();
final List<cv.Point> dstPoints = [
cv.Point(0, 0),
cv.Point(side, 0),
cv.Point(side, side),
cv.Point(0, side),
];
// Perspective matrix
final M = cv.getPerspectiveTransform(
cv.VecPoint.fromList(srcPoints),
cv.VecPoint.fromList(dstPoints),
);
// 6. Warp the image
final corrected = cv.warpPerspective(src, M, (side, side));
// 7. Save the result
final tempDir = await getTemporaryDirectory();
final timestamp = DateTime.now().millisecondsSinceEpoch;
final outputPath = '${tempDir.path}/corrected_circle_$timestamp.jpg';
cv.imwrite(outputPath, corrected);
return outputPath;
} catch (e) {
// On error, return the original image
print('Circle-based perspective correction error: $e');
return imagePath;
}
}
/// Sorts the points in the order: Top-Left, Top-Right, Bottom-Right, Bottom-Left
void _sortPoints(List<cv.Point> points) {
// Compute the centroid
double cx = 0;
double cy = 0;
for (final p in points) {
cx += p.x;
cy += p.y;
}
cx /= points.length;
cy /= points.length;
points.sort((a, b) {
// Sort by angle around the centroid
final angleA = math.atan2(a.y - cy, a.x - cx);
final angleB = math.atan2(b.y - cy, b.x - cx);
return angleA.compareTo(angleB);
});
// Re-sort by x + y: the smallest sum is top-left, the largest bottom-right
// (this ordering supersedes the angle sort above).
points.sort((a, b) => (a.y + a.x).compareTo(b.y + b.x));
final tl = points[0];
final br = points[3];
// The remaining points are tr and bl
final remaining = [points[1], points[2]];
remaining.sort((a, b) => a.x.compareTo(b.x));
final bl = remaining[0];
final tr = remaining[1];
points[0] = tl;
points[1] = tr;
points[2] = br;
points[3] = bl;
}
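`_sortPoints` orders corners by the x + y heuristic: the smallest sum is top-left, the largest bottom-right, and the remaining two are split by x into bottom-left and top-right. A minimal sketch of that ordering on plain records (the `sortCorners` name is illustrative; note the heuristic can misorder strongly rotated boxes whose sums tie):

```dart
// Order 4 corners as TL, TR, BR, BL using the x + y heuristic.
List<({int x, int y})> sortCorners(List<({int x, int y})> pts) {
  final sorted = [...pts]..sort((a, b) => (a.x + a.y).compareTo(b.x + b.y));
  final tl = sorted[0], br = sorted[3]; // extreme sums
  // Of the remaining two, the smaller x is bottom-left, the larger top-right.
  final rest = [sorted[1], sorted[2]]..sort((a, b) => a.x.compareTo(b.x));
  return [tl, rest[1], br, rest[0]]; // TL, TR, BR, BL
}

void main() {
  final out = sortCorners(
    [(x: 9, y: 1), (x: 0, y: 0), (x: 10, y: 10), (x: 1, y: 9)],
  );
  assert(out[0] == (x: 0, y: 0)); // TL
  assert(out[1] == (x: 9, y: 1)); // TR
  assert(out[2] == (x: 10, y: 10)); // BR
  assert(out[3] == (x: 1, y: 9)); // BL
}
```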
/// Corrects perspective by reshaping the largest oval (ellipse) into a perfect circle,
/// without aggressively cropping the entire image.
Future<String> correctPerspectiveUsingOvals(String imagePath) async {
try {
final src = cv.imread(imagePath, flags: cv.IMREAD_COLOR);
if (src.isEmpty) throw Exception("Impossible de charger l'image");
final gray = cv.cvtColor(src, cv.COLOR_BGR2GRAY);
final blurred = cv.gaussianBlur(gray, (5, 5), 0);
final thresh = cv.threshold(
blurred,
0,
255,
cv.THRESH_BINARY | cv.THRESH_OTSU,
);
final edges = cv.canny(blurred, thresh.$1 * 0.5, thresh.$1);
final contoursResult = cv.findContours(
edges,
cv.RETR_EXTERNAL,
cv.CHAIN_APPROX_SIMPLE,
);
final contours = contoursResult.$1;
if (contours.isEmpty) return imagePath;
cv.RotatedRect? bestEllipse;
double maxArea = 0;
for (final contour in contours) {
if (contour.length < 5) continue;
final area = cv.contourArea(contour);
if (area < 1000) continue;
final ellipse = cv.fitEllipse(contour);
if (area > maxArea) {
maxArea = area;
bestEllipse = ellipse;
}
}
if (bestEllipse == null) return imagePath;
// The goal here is to morph the bestEllipse into a perfect circle, while
// keeping the image the same size and the center of the ellipse in the same place.
// We'll use the average of the width and height (or max) to define the target circle
final targetRadius =
math.max(bestEllipse.size.width, bestEllipse.size.height) / 2.0;
// Extract the 4 bounding box points of the ellipse
final boxPoints = cv.boxPoints(bestEllipse);
final List<cv.Point> srcPoints = [];
for (int i = 0; i < boxPoints.length; i++) {
srcPoints.add(cv.Point(boxPoints[i].x.toInt(), boxPoints[i].y.toInt()));
}
_sortPoints(srcPoints);
// Calculate the size of the perfectly squared output image
final int side = (targetRadius * 2).toInt();
final List<cv.Point> dstPoints = [
cv.Point(0, 0), // Top-Left
cv.Point(side, 0), // Top-Right
cv.Point(side, side), // Bottom-Right
cv.Point(0, side), // Bottom-Left
];
// Morph the target region into a perfect square, cropping the rest of the image
final M = cv.getPerspectiveTransform(
cv.VecPoint.fromList(srcPoints),
cv.VecPoint.fromList(dstPoints),
);
final corrected = cv.warpPerspective(src, M, (side, side));
final tempDir = await getTemporaryDirectory();
final timestamp = DateTime.now().millisecondsSinceEpoch;
final outputPath = '${tempDir.path}/corrected_oval_$timestamp.jpg';
cv.imwrite(outputPath, corrected);
return outputPath;
} catch (e) {
print('Oval-based perspective correction error: $e');
return imagePath;
}
}
/// Corrects distortion and depth (perspective) by building a mesh based on
/// the concentricity of the target's rings to find the best-fitting plane.
Future<String> correctPerspectiveWithConcentricMesh(String imagePath) async {
try {
final src = cv.imread(imagePath, flags: cv.IMREAD_COLOR);
if (src.isEmpty) throw Exception("Impossible de charger l'image");
final gray = cv.cvtColor(src, cv.COLOR_BGR2GRAY);
final blurred = cv.gaussianBlur(gray, (5, 5), 0);
final thresh = cv.threshold(
blurred,
0,
255,
cv.THRESH_BINARY | cv.THRESH_OTSU,
);
final edges = cv.canny(blurred, thresh.$1 * 0.5, thresh.$1);
final contoursResult = cv.findContours(
edges,
cv.RETR_LIST,
cv.CHAIN_APPROX_SIMPLE,
);
final contours = contoursResult.$1;
if (contours.isEmpty) return imagePath;
List<cv.RotatedRect> ellipses = [];
for (final contour in contours) {
if (contour.length < 5) continue;
if (cv.contourArea(contour) < 500) continue;
ellipses.add(cv.fitEllipse(contour));
}
if (ellipses.isEmpty) return imagePath;
// Find the largest ellipse to serve as our central reference
ellipses.sort(
(a, b) => (b.size.width * b.size.height).compareTo(
a.size.width * a.size.height,
),
);
final largestEllipse = ellipses.first;
final maxDist =
math.max(largestEllipse.size.width, largestEllipse.size.height) *
0.15;
// Group all ellipses that are roughly concentric with the largest one
List<cv.RotatedRect> concentricGroup = [];
for (final e in ellipses) {
final dx = e.center.x - largestEllipse.center.x;
final dy = e.center.y - largestEllipse.center.y;
if (math.sqrt(dx * dx + dy * dy) < maxDist) {
concentricGroup.add(e);
}
}
if (concentricGroup.length < 2) {
print(
"Not enough concentric circles for the mesh; falling back to the simple method.",
);
return await correctPerspectiveUsingOvals(imagePath);
}
final targetRadius =
math.max(largestEllipse.size.width, largestEllipse.size.height) / 2.0;
final int side = (targetRadius * 2.4).toInt(); // Add padding
final double cx = side / 2.0;
final double cy = side / 2.0;
List<cv.Point2f> srcPointsList = [];
List<cv.Point2f> dstPointsList = [];
for (final ellipse in concentricGroup) {
final box = cv.boxPoints(ellipse);
final m0 = cv.Point2f(
(box[0].x + box[1].x) / 2,
(box[0].y + box[1].y) / 2,
);
final m1 = cv.Point2f(
(box[1].x + box[2].x) / 2,
(box[1].y + box[2].y) / 2,
);
final m2 = cv.Point2f(
(box[2].x + box[3].x) / 2,
(box[2].y + box[3].y) / 2,
);
final m3 = cv.Point2f(
(box[3].x + box[0].x) / 2,
(box[3].y + box[0].y) / 2,
);
final d02 = math.sqrt(
math.pow(m0.x - m2.x, 2) + math.pow(m0.y - m2.y, 2),
);
final d13 = math.sqrt(
math.pow(m1.x - m3.x, 2) + math.pow(m1.y - m3.y, 2),
);
cv.Point2f maj1, maj2, min1, min2;
double r;
if (d02 > d13) {
maj1 = m0;
maj2 = m2;
min1 = m1;
min2 = m3;
r = d02 / 2.0;
} else {
maj1 = m1;
maj2 = m3;
min1 = m0;
min2 = m2;
r = d13 / 2.0;
}
// Sort maj1 and maj2 so maj1 is left/top
if ((maj1.x - maj2.x).abs() > (maj1.y - maj2.y).abs()) {
if (maj1.x > maj2.x) {
final t = maj1;
maj1 = maj2;
maj2 = t;
}
} else {
if (maj1.y > maj2.y) {
final t = maj1;
maj1 = maj2;
maj2 = t;
}
}
// Sort min1 and min2 so min1 is top/left
if ((min1.y - min2.y).abs() > (min1.x - min2.x).abs()) {
if (min1.y > min2.y) {
final t = min1;
min1 = min2;
min2 = t;
}
} else {
if (min1.x > min2.x) {
final t = min1;
min1 = min2;
min2 = t;
}
}
srcPointsList.addAll([maj1, maj2, min1, min2]);
dstPointsList.addAll([
cv.Point2f(cx - r, cy),
cv.Point2f(cx + r, cy),
cv.Point2f(cx, cy - r),
cv.Point2f(cx, cy + r),
]);
// Add ellipse centers mapping perfectly to the origin to force concentric depth alignment
srcPointsList.add(cv.Point2f(ellipse.center.x, ellipse.center.y));
dstPointsList.add(cv.Point2f(cx, cy));
}
// We explicitly convert points to VecPoint to use the standard findHomography binding (note: the int conversion drops sub-pixel precision)
final srcVec = cv.VecPoint.fromList(
srcPointsList.map((p) => cv.Point(p.x.toInt(), p.y.toInt())).toList(),
);
final dstVec = cv.VecPoint.fromList(
dstPointsList.map((p) => cv.Point(p.x.toInt(), p.y.toInt())).toList(),
);
final M = cv.findHomography(
cv.Mat.fromVec(srcVec),
cv.Mat.fromVec(dstVec),
method: cv.RANSAC,
);
if (M.isEmpty) {
return await correctPerspectiveUsingOvals(imagePath);
}
final corrected = cv.warpPerspective(src, M, (side, side));
final tempDir = await getTemporaryDirectory();
final timestamp = DateTime.now().millisecondsSinceEpoch;
final outputPath = '${tempDir.path}/corrected_mesh_$timestamp.jpg';
cv.imwrite(outputPath, corrected);
return outputPath;
} catch (e) {
print('Concentric mesh perspective correction error: $e');
return imagePath;
}
}
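The axis-endpoint trick above — taking the midpoints of the rotated bounding box's sides as the ellipse's axis endpoints, then comparing the two diagonal distances to tell major from minor — can be checked in isolation. A minimal plain-Python sketch (helper names are mine, not from the codebase):

```python
# Standalone sketch of the side-midpoint trick: the midpoints of a rotated
# rectangle's sides are the endpoints of the inscribed ellipse's axes; the
# pair that is farther apart spans the major axis.
import math

def box_points(cx, cy, w, h, angle_deg):
    # Corners of a rotated rect, analogous to cv.boxPoints.
    a = math.radians(angle_deg)
    ca, sa = math.cos(a), math.sin(a)
    return [(cx + dx * ca - dy * sa, cy + dx * sa + dy * ca)
            for dx, dy in ((-w/2, -h/2), (w/2, -h/2), (w/2, h/2), (-w/2, h/2))]

def axis_endpoints(cx, cy, w, h, angle_deg):
    box = box_points(cx, cy, w, h, angle_deg)
    mids = [((box[i][0] + box[(i + 1) % 4][0]) / 2,
             (box[i][1] + box[(i + 1) % 4][1]) / 2) for i in range(4)]
    d02 = math.dist(mids[0], mids[2])
    d13 = math.dist(mids[1], mids[3])
    if d02 > d13:
        return (mids[0], mids[2]), (mids[1], mids[3])  # (major, minor)
    return (mids[1], mids[3]), (mids[0], mids[2])
```

For a 10×4 rect at angle 0 the major pair is (±5, 0) and the minor pair (0, ±2), matching the d02/d13 comparison in the Dart code.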
/// Corrects perspective by detecting the sheet's 4 corners (quadrilateral)
///
/// This method looks for the largest 4-sided polygon (the paper's edge)
/// and warps it into a perfect square.
Future<String> correctPerspectiveUsingQuadrilateral(String imagePath) async {
try {
final src = cv.imread(imagePath, flags: cv.IMREAD_COLOR);
if (src.isEmpty) throw Exception("Failed to load image");
final gray = cv.cvtColor(src, cv.COLOR_BGR2GRAY);
// Heavier blur to ignore internal details (circles, holes)
final blurred = cv.gaussianBlur(gray, (9, 9), 0);
// Canny edge detector
final thresh = cv.threshold(
blurred,
0,
255,
cv.THRESH_BINARY | cv.THRESH_OTSU,
);
final edges = cv.canny(blurred, thresh.$1 * 0.5, thresh.$1);
// Close gaps in the sheet's edges (they can be broken up by lighting)
final kernel = cv.getStructuringElement(cv.MORPH_RECT, (5, 5));
final closedEdges = cv.morphologyEx(edges, cv.MORPH_CLOSE, kernel);
// Find contours
final contoursResult = cv.findContours(
closedEdges,
cv.RETR_EXTERNAL,
cv.CHAIN_APPROX_SIMPLE,
);
final contours = contoursResult.$1;
cv.VecPoint? bestQuad;
double maxArea = 0;
final minArea = src.rows * src.cols * 0.1; // At least 10% of the image
for (final contour in contours) {
final area = cv.contourArea(contour);
if (area < minArea) continue;
final peri = cv.arcLength(contour, true);
// Polygonal approximation (tolerance = 4% of the perimeter)
final approx = cv.approxPolyDP(contour, 0.04 * peri, true);
if (approx.length == 4) {
if (area > maxArea) {
maxArea = area;
bestQuad = approx;
}
}
}
// Fallback
if (bestQuad == null) {
print(
  "No quadrilateral sheet detected; using circles instead.",
);
return await correctPerspectiveUsingCircles(imagePath);
}
// Convert to List<cv.Point>
final List<cv.Point> srcPoints = [];
for (int i = 0; i < bestQuad.length; i++) {
srcPoints.add(bestQuad[i]);
}
_sortPoints(srcPoints);
// Calculate max width and height
double widthA = _distanceCV(srcPoints[2], srcPoints[3]);
double widthB = _distanceCV(srcPoints[1], srcPoints[0]);
int dstWidth = math.max(widthA, widthB).toInt();
double heightA = _distanceCV(srcPoints[1], srcPoints[2]);
double heightB = _distanceCV(srcPoints[0], srcPoints[3]);
int dstHeight = math.max(heightA, heightB).toInt();
// Since standard target paper forms a square, we force the resulting warp to be a perfect square.
int side = math.max(dstWidth, dstHeight);
final List<cv.Point> dstPoints = [
cv.Point(0, 0),
cv.Point(side, 0),
cv.Point(side, side),
cv.Point(0, side),
];
final M = cv.getPerspectiveTransform(
cv.VecPoint.fromList(srcPoints),
cv.VecPoint.fromList(dstPoints),
);
final corrected = cv.warpPerspective(src, M, (side, side));
final tempDir = await getTemporaryDirectory();
final timestamp = DateTime.now().millisecondsSinceEpoch;
final outputPath = '${tempDir.path}/corrected_quad_$timestamp.jpg';
cv.imwrite(outputPath, corrected);
return outputPath;
} catch (e) {
print('Quadrilateral perspective correction error: $e');
// Fallback
return await correctPerspectiveUsingCircles(imagePath);
}
}
double _distanceCV(cv.Point p1, cv.Point p2) {
final dx = p2.x - p1.x;
final dy = p2.y - p1.y;
return math.sqrt(dx * dx + dy * dy);
}
}
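_sortPoints is defined elsewhere in the service; a common implementation of this corner ordering (assumed here, not shown in the diff) uses the coordinate sum/difference heuristic so the quad maps onto (0,0), (side,0), (side,side), (0,side):

```python
# Hypothetical sketch of TL/TR/BR/BL corner ordering via the sum/diff
# heuristic: TL minimizes x+y, BR maximizes it, TR maximizes x-y, BL
# minimizes it. Assumes the four points form a roughly upright quad.
def order_corners(pts):
    tl = min(pts, key=lambda p: p[0] + p[1])
    br = max(pts, key=lambda p: p[0] + p[1])
    tr = max(pts, key=lambda p: p[0] - p[1])
    bl = min(pts, key=lambda p: p[0] - p[1])
    return [tl, tr, br, bl]
```

This ordering must agree with the dstPoints list, which goes top-left, top-right, bottom-right, bottom-left.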


@@ -1,13 +1,8 @@
/// Impact detection service using OpenCV.
///
/// NOTE: OpenCV is currently disabled on Windows because of build
/// issues. This file contains stubs that let the code compile without
/// OpenCV. Re-enable opencv_dart in pubspec.yaml and uncomment the code
/// below once support is fixed.
library;
// import 'dart:math' as math;
// import 'package:opencv_dart/opencv_dart.dart' as cv;
import 'dart:math' as math;
import 'package:opencv_dart/opencv_dart.dart' as cv;
/// OpenCV impact detection settings
class OpenCVDetectionSettings {
@@ -90,30 +85,144 @@ class OpenCVDetectedImpact {
}
/// Impact detection service using OpenCV
///
/// NOTE: Currently disabled - returns empty lists.
/// OpenCV is not available on Windows for now.
class OpenCVImpactDetectionService {
/// Detects impacts in an image using OpenCV
///
/// STUB: Returns an empty list because OpenCV is disabled.
List<OpenCVDetectedImpact> detectImpacts(
String imagePath, {
OpenCVDetectionSettings settings = const OpenCVDetectionSettings(),
}) {
print('OpenCV is disabled - classic detection is recommended instead');
return [];
try {
final img = cv.imread(imagePath, flags: cv.IMREAD_COLOR);
if (img.isEmpty) return [];
final gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY);
// Apply blur to reduce noise
final blurKSize = (settings.blurSize, settings.blurSize);
final blurred = cv.gaussianBlur(gray, blurKSize, 2, sigmaY: 2);
final List<OpenCVDetectedImpact> detectedImpacts = [];
final circles = cv.HoughCircles(
blurred,
cv.HOUGH_GRADIENT,
1,
settings.minDist,
param1: settings.param1,
param2: settings.param2,
minRadius: settings.minRadius,
maxRadius: settings.maxRadius,
);
if (circles.rows > 0 && circles.cols > 0) {
// Mat shape: (1, N, 3) usually for HoughCircles (CV_32FC3)
// We use at<Vec3f> directly.
for (int i = 0; i < circles.cols; i++) {
final vec = circles.at<cv.Vec3f>(0, i);
final x = vec.val1;
final y = vec.val2;
final r = vec.val3;
detectedImpacts.add(
OpenCVDetectedImpact(
x: x / img.cols,
y: y / img.rows,
radius: r,
confidence: 0.8,
method: 'hough',
),
);
}
}
// 2. Contour Detection (if enabled)
if (settings.useContourDetection) {
// Canny edge detection
final edges = cv.canny(
blurred,
settings.cannyThreshold1,
settings.cannyThreshold2,
);
// Find contours
final contoursResult = cv.findContours(
edges,
cv.RETR_EXTERNAL,
cv.CHAIN_APPROX_SIMPLE,
);
final contours = contoursResult.$1;
// hierarchy is $2
for (int i = 0; i < contours.length; i++) {
final contour = contours[i];
// Filter by area
final area = cv.contourArea(contour);
if (area < settings.minContourArea ||
area > settings.maxContourArea) {
continue;
}
// Filter by circularity
final perimeter = cv.arcLength(contour, true);
if (perimeter == 0) continue;
final circularity = 4 * math.pi * area / (perimeter * perimeter);
if (circularity < settings.minCircularity) continue;
// Get bounding circle
final enclosingCircle = cv.minEnclosingCircle(contour);
final center = enclosingCircle.$1;
final radius = enclosingCircle.$2;
// Avoid duplicates (simple distance check against Hough results)
bool isDuplicate = false;
for (final existing in detectedImpacts) {
final dx = existing.x * img.cols - center.x;
final dy = existing.y * img.rows - center.y;
final dist = math.sqrt(dx * dx + dy * dy);
if (dist < radius) {
isDuplicate = true;
break;
}
}
if (!isDuplicate) {
detectedImpacts.add(
OpenCVDetectedImpact(
x: center.x / img.cols,
y: center.y / img.rows,
radius: radius,
confidence: circularity, // Use circularity as confidence
method: 'contour',
),
);
}
}
}
return detectedImpacts;
} catch (e) {
// print('OpenCV Error: $e');
return [];
}
}
/// Detects impacts using a reference image
///
/// STUB: Returns an empty list because OpenCV is disabled.
List<OpenCVDetectedImpact> detectFromReferences(
String imagePath,
List<({double x, double y})> referencePoints, {
double tolerance = 2.0,
}) {
print('OpenCV is disabled - classic reference-based detection is recommended instead');
return [];
// Basic implementation: use average color/brightness of reference points
// This is a placeholder for a more complex template matching or feature matching
// For now, we can just run the standard detection but filter results
// based on properties of the reference points (e.g. size/radius if we had it).
// Returning standard detection for now to enable the feature.
return detectImpacts(imagePath);
}
}
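The contour filter in detectImpacts uses the classic circularity measure 4πA/P², which is 1.0 for a perfect circle and drops for elongated shapes. A quick plain-Python sanity check mirroring the Dart expression:

```python
import math

def circularity(area, perimeter):
    # 4*pi*A / P^2: 1.0 for a circle, pi/4 (~0.785) for a square.
    if perimeter == 0:
        return 0.0
    return 4 * math.pi * area / (perimeter ** 2)
```

A minCircularity threshold around 0.8 therefore accepts circles but rejects squares and most ragged tears.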


@@ -0,0 +1,240 @@
import 'dart:math' as math;
import 'package:opencv_dart/opencv_dart.dart' as cv;
class TargetDetectionResult {
final double centerX;
final double centerY;
final double radius;
final bool success;
TargetDetectionResult({
required this.centerX,
required this.centerY,
required this.radius,
this.success = true,
});
factory TargetDetectionResult.failure() {
return TargetDetectionResult(
centerX: 0.5,
centerY: 0.5,
radius: 0.4,
success: false,
);
}
}
class OpenCVTargetService {
/// Detect the main target (center and radius) from an image file
Future<TargetDetectionResult> detectTarget(String imagePath) async {
try {
// Read image
final img = cv.imread(imagePath, flags: cv.IMREAD_COLOR);
if (img.isEmpty) {
return TargetDetectionResult.failure();
}
// Convert to grayscale
final gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY);
// Apply Gaussian blur to reduce noise
final blurred = cv.gaussianBlur(gray, (9, 9), 2, sigmaY: 2);
// Detect circles using Hough Transform
// Parameters need to be tuned for the specific target type
final circles = cv.HoughCircles(
blurred,
cv.HOUGH_GRADIENT,
1, // dp
(img.rows / 16)
.toDouble(), // minDist decreased to allow more rings in same general area
param1: 100, // Canny edge detection
param2:
60, // Accumulator threshold (higher = fewer false circles, more accurate)
minRadius: img.cols ~/ 20,
maxRadius: img.cols ~/ 2,
);
// HoughCircles returns a Mat of shape (1, N, 3), type CV_32FC3, where N is
// the number of circles. The earlier "method 'at' not defined" error was
// raised for VecPoint2f, not for Mat, so reading each (x, y, r) triple
// with circles.at<Vec3f>(0, i) is the standard, safe approach here.
if (circles.isEmpty) {
// Try with different parameters if first attempt fails (more lenient)
final looseCircles = cv.HoughCircles(
blurred,
cv.HOUGH_GRADIENT,
1,
(img.rows / 8).toDouble(),
param1: 100,
param2: 40,
minRadius: img.cols ~/ 20,
maxRadius: img.cols ~/ 2,
);
if (looseCircles.isEmpty) {
return TargetDetectionResult.failure();
}
return _findBestConcentricCircles(looseCircles, img.cols, img.rows);
}
return _findBestConcentricCircles(circles, img.cols, img.rows);
} catch (e) {
// print('Error detecting target with OpenCV: $e');
return TargetDetectionResult.failure();
}
}
TargetDetectionResult _findBestConcentricCircles(
cv.Mat circles,
int width,
int height,
) {
if (circles.rows == 0 || circles.cols == 0) {
return TargetDetectionResult.failure();
}
final int numCircles = circles.cols;
final List<({double x, double y, double r})> detected = [];
// Extract circles: HoughCircles yields a (1, N, 3) CV_32FC3 Mat, so each
// (x, y, r) triple is read with at<cv.Vec3f>(0, i). (The earlier
// "at not defined" error concerned VecPoint2f, which is not a Mat.)
for (int i = 0; i < numCircles; i++) {
final vec = circles.at<cv.Vec3f>(0, i);
detected.add((x: vec.val1, y: vec.val2, r: vec.val3));
}
if (detected.isEmpty) return TargetDetectionResult.failure();
// Cluster circles by center position
// We consider circles "concentric" if their centers are within 5% of image min dimension
final double tolerance = math.min(width, height) * 0.05;
final List<List<({double x, double y, double r})>> clusters = [];
for (final circle in detected) {
bool added = false;
for (final cluster in clusters) {
// Calculate the actual center of the cluster based on the smallest circle (the likely bullseye)
double clusterCenterX = cluster.first.x;
double clusterCenterY = cluster.first.y;
double minRadiusInCluster = cluster.first.r;
for (final c in cluster) {
if (c.r < minRadiusInCluster) {
minRadiusInCluster = c.r;
clusterCenterX = c.x;
clusterCenterY = c.y;
}
}
final dist = math.sqrt(
math.pow(circle.x - clusterCenterX, 2) +
math.pow(circle.y - clusterCenterY, 2),
);
if (dist < tolerance) {
cluster.add(circle);
added = true;
break;
}
}
if (!added) {
clusters.add([circle]);
}
}
// Find the best cluster
// 1. Prefer clusters with more circles (concentric rings)
// 2. Tie-break: closest to image center
List<({double x, double y, double r})> bestCluster = clusters.first;
double bestScore = -1.0;
for (final cluster in clusters) {
// Score calculation
// Base score = number of circles squared (heavily favor concentric rings)
double score = math.pow(cluster.length, 2).toDouble() * 10.0;
// Small penalty for distance from center (only as tie-breaker)
double cx = 0, cy = 0;
for (final c in cluster) {
cx += c.x;
cy += c.y;
}
cx /= cluster.length;
cy /= cluster.length;
final distFromCenter = math.sqrt(
math.pow(cx - width / 2, 2) + math.pow(cy - height / 2, 2),
);
final relDist = distFromCenter / math.min(width, height);
score -=
relDist * 2.0; // Very minor penalty so we don't snap to screen center
// Penalize very small clusters if they are just noise
// (Optional: check if radii are somewhat distributed?)
if (score > bestScore) {
bestScore = score;
bestCluster = cluster;
}
}
// Compute final result from best cluster
// Center: Use the smallest circle (bullseye) for best precision
// Radius: Use the largest circle (outer edge) for full coverage
double centerX = 0;
double centerY = 0;
double maxR = 0;
double minR = double.infinity;
for (final c in bestCluster) {
if (c.r > maxR) {
maxR = c.r;
}
if (c.r < minR) {
minR = c.r;
centerX = c.x;
centerY = c.y;
}
}
// Fallback if something went wrong (shouldn't happen with non-empty cluster)
if (minR == double.infinity) {
centerX = bestCluster.first.x;
centerY = bestCluster.first.y;
}
return TargetDetectionResult(
centerX: centerX / width,
centerY: centerY / height,
radius: maxR / math.min(width, height),
success: true,
);
}
}
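The clustering loop in _findBestConcentricCircles anchors each cluster on its smallest circle (the likely bullseye) and admits a new circle only when its center lies within the tolerance of that anchor. A reduced sketch of just that grouping step (hypothetical helper, circles as (x, y, r) tuples):

```python
import math

def cluster_circles(circles, tol):
    # Group circles whose centers lie within tol of the cluster anchor,
    # where the anchor is the smallest-radius circle seen so far.
    clusters = []
    for (x, y, r) in circles:
        for cl in clusters:
            ax, ay, _ = min(cl, key=lambda c: c[2])  # smallest-radius anchor
            if math.dist((x, y), (ax, ay)) < tol:
                cl.append((x, y, r))
                break
        else:
            clusters.append([(x, y, r)])
    return max(clusters, key=len)  # prefer the cluster with most rings
```

Three rings sharing a center beat a lone stray circle elsewhere in the frame, which is the behavior the squared-length score in the Dart code reinforces.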


@@ -2,9 +2,12 @@ import 'dart:math' as math;
import '../data/models/target_type.dart';
import 'image_processing_service.dart';
import 'opencv_impact_detection_service.dart';
import 'yolo_impact_detection_service.dart';
export 'image_processing_service.dart' show ImpactDetectionSettings, ReferenceImpact, ImpactCharacteristics;
export 'opencv_impact_detection_service.dart' show OpenCVDetectionSettings, OpenCVDetectedImpact;
export 'image_processing_service.dart'
show ImpactDetectionSettings, ReferenceImpact, ImpactCharacteristics;
export 'opencv_impact_detection_service.dart'
show OpenCVDetectionSettings, OpenCVDetectedImpact;
class TargetDetectionResult {
final double centerX; // Relative (0-1)
@@ -52,18 +55,19 @@ class DetectedImpactResult {
class TargetDetectionService {
final ImageProcessingService _imageProcessingService;
final OpenCVImpactDetectionService _opencvService;
final YOLOImpactDetectionService _yoloService;
TargetDetectionService({
ImageProcessingService? imageProcessingService,
OpenCVImpactDetectionService? opencvService,
}) : _imageProcessingService = imageProcessingService ?? ImageProcessingService(),
_opencvService = opencvService ?? OpenCVImpactDetectionService();
YOLOImpactDetectionService? yoloService,
}) : _imageProcessingService =
imageProcessingService ?? ImageProcessingService(),
_opencvService = opencvService ?? OpenCVImpactDetectionService(),
_yoloService = yoloService ?? YOLOImpactDetectionService();
/// Detect target and impacts from an image file
TargetDetectionResult detectTarget(
String imagePath,
TargetType targetType,
) {
TargetDetectionResult detectTarget(String imagePath, TargetType targetType) {
try {
// Detect main target
final mainTarget = _imageProcessingService.detectMainTarget(imagePath);
@@ -84,7 +88,13 @@ class TargetDetectionService {
// Convert impacts to relative coordinates and calculate scores
final detectedImpacts = impacts.map((impact) {
final score = targetType == TargetType.concentric
? _calculateConcentricScore(impact.x, impact.y, centerX, centerY, radius)
? _calculateConcentricScore(
impact.x,
impact.y,
centerX,
centerY,
radius,
)
: _calculateSilhouetteScore(impact.x, impact.y, centerX, centerY);
return DetectedImpactResult(
@@ -149,9 +159,9 @@ class TargetDetectionService {
// Vertical zones
if (dy < -0.25) return 5; // Head zone (top)
if (dy < 0.0) return 5; // Center mass (upper body)
if (dy < 0.15) return 4; // Body
if (dy < 0.35) return 3; // Lower body
if (dy < 0.0) return 5; // Center mass (upper body)
if (dy < 0.15) return 4; // Body
if (dy < 0.35) return 3; // Lower body
return 0; // Outside target
}
@@ -177,7 +187,13 @@ class TargetDetectionService {
return impacts.map((impact) {
final score = targetType == TargetType.concentric
? _calculateConcentricScoreWithRings(
impact.x, impact.y, centerX, centerY, radius, ringCount)
impact.x,
impact.y,
centerX,
centerY,
radius,
ringCount,
)
: _calculateSilhouetteScore(impact.x, impact.y, centerX, centerY);
return DetectedImpactResult(
@@ -221,7 +237,10 @@ class TargetDetectionService {
String imagePath,
List<ReferenceImpact> references,
) {
return _imageProcessingService.analyzeReferenceImpacts(imagePath, references);
return _imageProcessingService.analyzeReferenceImpacts(
imagePath,
references,
);
}
/// Detect impacts based on reference characteristics (calibrated detection)
@@ -245,7 +264,13 @@ class TargetDetectionService {
return impacts.map((impact) {
final score = targetType == TargetType.concentric
? _calculateConcentricScoreWithRings(
impact.x, impact.y, centerX, centerY, radius, ringCount)
impact.x,
impact.y,
centerX,
centerY,
radius,
ringCount,
)
: _calculateSilhouetteScore(impact.x, impact.y, centerX, centerY);
return DetectedImpactResult(
@@ -283,7 +308,13 @@ class TargetDetectionService {
return impacts.map((impact) {
final score = targetType == TargetType.concentric
? _calculateConcentricScoreWithRings(
impact.x, impact.y, centerX, centerY, radius, ringCount)
impact.x,
impact.y,
centerX,
centerY,
radius,
ringCount,
)
: _calculateSilhouetteScore(impact.x, impact.y, centerX, centerY);
return DetectedImpactResult(
@@ -315,9 +346,7 @@ class TargetDetectionService {
}) {
try {
// Convert the references to the OpenCV format
final refPoints = references
.map((r) => (x: r.x, y: r.y))
.toList();
final refPoints = references.map((r) => (x: r.x, y: r.y)).toList();
final impacts = _opencvService.detectFromReferences(
imagePath,
@@ -328,7 +357,13 @@ class TargetDetectionService {
return impacts.map((impact) {
final score = targetType == TargetType.concentric
? _calculateConcentricScoreWithRings(
impact.x, impact.y, centerX, centerY, radius, ringCount)
impact.x,
impact.y,
centerX,
centerY,
radius,
ringCount,
)
: _calculateSilhouetteScore(impact.x, impact.y, centerX, centerY);
return DetectedImpactResult(
@@ -343,4 +378,41 @@ class TargetDetectionService {
return [];
}
}
/// Detects impacts using YOLOv8
Future<List<DetectedImpactResult>> detectImpactsWithYOLO(
String imagePath,
TargetType targetType,
double centerX,
double centerY,
double radius,
int ringCount,
) async {
try {
final impacts = await _yoloService.detectImpacts(imagePath);
return impacts.map((impact) {
final score = targetType == TargetType.concentric
? _calculateConcentricScoreWithRings(
impact.x,
impact.y,
centerX,
centerY,
radius,
ringCount,
)
: _calculateSilhouetteScore(impact.x, impact.y, centerX, centerY);
return DetectedImpactResult(
x: impact.x,
y: impact.y,
radius: impact.radius,
suggestedScore: score,
);
}).toList();
} catch (e) {
print('YOLOv8 detection error: $e');
return [];
}
}
}


@@ -0,0 +1,174 @@
import 'dart:io';
import 'dart:math' as math;
import 'dart:typed_data';
import 'package:tflite_flutter/tflite_flutter.dart';
import 'package:image/image.dart' as img;
import 'target_detection_service.dart';
class YOLOImpactDetectionService {
Interpreter? _interpreter;
static const String modelPath = 'assets/models/yolov11n_impact.tflite';
static const String labelsPath = 'assets/models/labels.txt';
Future<void> init() async {
if (_interpreter != null) return;
try {
// Try loading the specific YOLOv11 model first, fallback to v8 if not found
try {
_interpreter = await Interpreter.fromAsset(modelPath);
} catch (e) {
print('YOLOv11 model not found at $modelPath, trying YOLOv8 fallback');
_interpreter = await Interpreter.fromAsset(
'assets/models/yolov8n_impact.tflite',
);
}
print('YOLO Interpreter loaded successfully');
} catch (e) {
print('Error loading YOLO model: $e');
}
}
Future<List<DetectedImpactResult>> detectImpacts(String imagePath) async {
if (_interpreter == null) await init();
if (_interpreter == null) return [];
try {
final bytes = File(imagePath).readAsBytesSync();
final originalImage = img.decodeImage(bytes);
if (originalImage == null) return [];
// YOLOv8/v11 usually takes 640x640
const int inputSize = 640;
final resizedImage = img.copyResize(
originalImage,
width: inputSize,
height: inputSize,
);
// Prepare input tensor
var input = _imageToByteListFloat32(resizedImage, inputSize);
// Raw YOLO output shape usually [1, 4 + num_classes, 8400]
// For single class "impact", it's [1, 5, 8400]
var output = List<double>.filled(1 * 5 * 8400, 0).reshape([1, 5, 8400]);
_interpreter!.run(input, output);
return _processOutput(
output[0],
originalImage.width,
originalImage.height,
);
} catch (e) {
print('Error during YOLO inference: $e');
return [];
}
}
List<DetectedImpactResult> _processOutput(
List<List<double>> output,
int imgWidth,
int imgHeight,
) {
final List<_Detection> candidates = [];
const double threshold = 0.25;
// output is [5, 8400] -> [x, y, w, h, conf]
for (int i = 0; i < 8400; i++) {
final double confidence = output[4][i];
if (confidence > threshold) {
candidates.add(
_Detection(
x: output[0][i],
y: output[1][i],
w: output[2][i],
h: output[3][i],
confidence: confidence,
),
);
}
}
// Apply Non-Max Suppression (NMS)
final List<_Detection> suppressed = _nms(candidates);
return suppressed
.map(
(det) => DetectedImpactResult(
x: det.x / 640.0,
y: det.y / 640.0,
radius: 5.0,
suggestedScore: 0,
),
)
.toList();
}
List<_Detection> _nms(List<_Detection> detections) {
if (detections.isEmpty) return [];
// Sort by confidence descending
detections.sort((a, b) => b.confidence.compareTo(a.confidence));
final List<_Detection> selected = [];
final List<bool> active = List.filled(detections.length, true);
for (int i = 0; i < detections.length; i++) {
if (!active[i]) continue;
selected.add(detections[i]);
for (int j = i + 1; j < detections.length; j++) {
if (!active[j]) continue;
if (_iou(detections[i], detections[j]) > 0.45) {
active[j] = false;
}
}
}
return selected;
}
double _iou(_Detection a, _Detection b) {
final double areaA = a.w * a.h;
final double areaB = b.w * b.h;
final double x1 = math.max(a.x - a.w / 2, b.x - b.w / 2);
final double y1 = math.max(a.y - a.h / 2, b.y - b.h / 2);
final double x2 = math.min(a.x + a.w / 2, b.x + b.w / 2);
final double y2 = math.min(a.y + a.h / 2, b.y + b.h / 2);
final double intersection = math.max(0.0, x2 - x1) * math.max(0.0, y2 - y1);
return intersection / (areaA + areaB - intersection);
}
Uint8List _imageToByteListFloat32(img.Image image, int inputSize) {
var convertedBytes = Float32List(1 * inputSize * inputSize * 3);
var buffer = Float32List.view(convertedBytes.buffer);
int pixelIndex = 0;
for (int i = 0; i < inputSize; i++) {
for (int j = 0; j < inputSize; j++) {
var pixel = image.getPixel(j, i);
buffer[pixelIndex++] = (pixel.r / 255.0);
buffer[pixelIndex++] = (pixel.g / 255.0);
buffer[pixelIndex++] = (pixel.b / 255.0);
}
}
return convertedBytes.buffer.asUint8List();
}
}
class _Detection {
final double x, y, w, h, confidence;
_Detection({
required this.x,
required this.y,
required this.w,
required this.h,
required this.confidence,
});
}
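The _iou/_nms pair above is standard greedy non-max suppression over center-format boxes: sort by confidence, then keep a detection only if it overlaps no already-kept box beyond the IoU threshold. A compact sketch with the same (cx, cy, w, h) box convention:

```python
def iou(a, b):
    # a, b: (cx, cy, w, h) boxes in the same units.
    ax1, ay1 = a[0] - a[2] / 2, a[1] - a[3] / 2
    ax2, ay2 = a[0] + a[2] / 2, a[1] + a[3] / 2
    bx1, by1 = b[0] - b[2] / 2, b[1] - b[3] / 2
    bx2, by2 = b[0] + b[2] / 2, b[1] + b[3] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def nms(dets, thresh=0.45):
    # dets: list of ((cx, cy, w, h), confidence); greedy suppression.
    dets = sorted(dets, key=lambda d: d[1], reverse=True)
    kept = []
    for box, conf in dets:
        if all(iou(box, k) <= thresh for k, _ in kept):
            kept.append((box, conf))
    return kept
```

Two near-coincident boxes collapse to the higher-confidence one, while well-separated detections all survive, exactly as the active-flag loop in _nms does.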


@@ -7,6 +7,7 @@ list(APPEND FLUTTER_PLUGIN_LIST
)
list(APPEND FLUTTER_FFI_PLUGIN_LIST
tflite_flutter
)
set(PLUGIN_BUNDLED_LIBRARIES)


@@ -25,6 +25,14 @@ packages:
url: "https://pub.dev"
source: hosted
version: "2.1.2"
change_case:
dependency: transitive
description:
name: change_case
sha256: e41ef3df58521194ef8d7649928954805aeb08061917cf658322305e61568003
url: "https://pub.dev"
source: hosted
version: "2.2.0"
characters:
dependency: transitive
description:
@@ -61,10 +69,10 @@ packages:
dependency: transitive
description:
name: cross_file
sha256: "701dcfc06da0882883a2657c445103380e53e647060ad8d9dfb710c100996608"
sha256: "28bb3ae56f117b5aec029d702a90f57d285cd975c3c5c281eaca38dbc47c5937"
url: "https://pub.dev"
source: hosted
version: "0.3.5+1"
version: "0.3.5+2"
crypto:
dependency: transitive
description:
@@ -81,6 +89,14 @@ packages:
url: "https://pub.dev"
source: hosted
version: "1.0.8"
dartcv4:
dependency: transitive
description:
name: dartcv4
sha256: "43dba49162662f3b6e3daf5a95d071429365e2f1ada67d412b851fc9be442e58"
url: "https://pub.dev"
source: hosted
version: "2.2.1+1"
equatable:
dependency: transitive
description:
@@ -200,6 +216,14 @@ packages:
url: "https://pub.dev"
source: hosted
version: "2.1.3"
google_mlkit_document_scanner:
dependency: "direct main"
description:
name: google_mlkit_document_scanner
sha256: "67428ddb853880c8185049a5834cd328e6420921a74786f6aadee0b76f8536bd"
url: "https://pub.dev"
source: hosted
version: "0.2.1"
hooks:
dependency: transitive
description:
@@ -244,10 +268,10 @@ packages:
dependency: transitive
description:
name: image_picker_android
sha256: "5e9bf126c37c117cf8094215373c6d561117a3cfb50ebc5add1a61dc6e224677"
sha256: "518a16108529fc18657a3e6dde4a043dc465d16596d20ab2abd49a4cac2e703d"
url: "https://pub.dev"
source: hosted
version: "0.8.13+10"
version: "0.8.13+13"
image_picker_for_web:
dependency: transitive
description:
@@ -260,10 +284,10 @@ packages:
dependency: transitive
description:
name: image_picker_ios
sha256: "956c16a42c0c708f914021666ffcd8265dde36e673c9fa68c81f7d085d9774ad"
sha256: b9c4a438a9ff4f60808c9cf0039b93a42bb6c2211ef6ebb647394b2b3fa84588
url: "https://pub.dev"
source: hosted
version: "0.8.13+3"
version: "0.8.13+6"
image_picker_linux:
dependency: transitive
description:
@@ -384,6 +408,14 @@ packages:
url: "https://pub.dev"
source: hosted
version: "0.17.4"
native_toolchain_cmake:
dependency: transitive
description:
name: native_toolchain_cmake
sha256: fe40e8483183ced98e851e08a9cd2a547fd412cccab98277aa23f2377e43d66f
url: "https://pub.dev"
source: hosted
version: "0.2.4"
nested:
dependency: transitive
description:
@@ -392,6 +424,14 @@ packages:
url: "https://pub.dev"
source: hosted
version: "1.0.0"
opencv_dart:
dependency: "direct main"
description:
name: opencv_dart
sha256: c2b7cc614cad69c2857e9b684e3066af662a03fe7100f4dc9a630e81ad42103a
url: "https://pub.dev"
source: hosted
version: "2.2.1+1"
path:
dependency: "direct main"
description:
@@ -496,6 +536,14 @@ packages:
url: "https://pub.dev"
source: hosted
version: "2.2.0"
quiver:
dependency: transitive
description:
name: quiver
sha256: ea0b925899e64ecdfbf9c7becb60d5b50e706ade44a85b2363be2a22d88117d2
url: "https://pub.dev"
source: hosted
version: "3.2.2"
sky_engine:
dependency: transitive
description: flutter
@@ -613,6 +661,14 @@ packages:
url: "https://pub.dev"
source: hosted
version: "0.7.9"
tflite_flutter:
dependency: "direct main"
description:
name: tflite_flutter
sha256: ffb8651fdb116ab0131d6dc47ff73883e0f634ad1ab12bb2852eef1bbeab4a6a
url: "https://pub.dev"
source: hosted
version: "0.10.4"
typed_data:
dependency: transitive
description:
@@ -679,4 +735,4 @@ packages:
version: "3.1.3"
sdks:
dart: ">=3.12.0-35.0.dev <4.0.0"
flutter: ">=3.35.0"
flutter: ">=3.38.1"


@@ -35,11 +35,11 @@ dependencies:
# Use with the CupertinoIcons class for iOS style icons.
cupertino_icons: ^1.0.8
# Image processing with OpenCV (temporarily disabled - Windows build issues)
# opencv_dart: ^2.1.0
opencv_dart: ^2.1.0
# Image capture from camera/gallery
image_picker: ^1.0.7
image_picker: ^1.2.1
google_mlkit_document_scanner: ^0.2.0
# Local database for history
sqflite: ^2.3.2
@@ -64,6 +64,9 @@ dependencies:
# Image processing for impact detection
image: ^4.1.7
# Machine Learning for YOLOv8
tflite_flutter: ^0.10.4
dev_dependencies:
flutter_test:
sdk: flutter


@@ -0,0 +1,12 @@
import 'package:opencv_dart/opencv_dart.dart' as cv;
void main() {
var p1 = cv.VecPoint.fromList([cv.Point(0, 0), cv.Point(1, 1)]);
var p2 = cv.VecPoint2f.fromList([cv.Point2f(0, 0), cv.Point2f(1, 1)]);
// API probe: does findHomography take vecs directly, or does it need
// Mats built via cv.Mat.fromVec? Try the Mat route below.
cv.Mat mat1 = cv.Mat.fromVec(p1);
cv.Mat mat2 = cv.Mat.fromVec(p2);
cv.findHomography(mat1, mat2);
}


@@ -0,0 +1,7 @@
import 'package:opencv_dart/opencv_dart.dart' as cv;
void main() {
print(cv.approxPolyDP);
print(cv.arcLength);
print(cv.contourArea);
}


@@ -0,0 +1,5 @@
import 'package:opencv_dart/opencv_dart.dart' as cv;
void main() {
print(cv.findHomography);
}


@@ -7,6 +7,7 @@ list(APPEND FLUTTER_PLUGIN_LIST
)
list(APPEND FLUTTER_FFI_PLUGIN_LIST
tflite_flutter
)
set(PLUGIN_BUNDLED_LIBRARIES)