This project measures foot sizes from an image using a reference coin for scale and generates a PDF report of the detected feet. The report includes marked images and calculated dimensions, providing a detailed analysis of the detected objects.
- Background Removal: Removes the background from the input image.
- Object Detection: Detects feet and coins using YOLO models.
- Feet Measurements: Calculates the width and height of the detected feet in centimeters.
- PDF Report Generation: Creates a PDF report with marked images and measurements.
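The coin is what makes real-world measurement possible: its known diameter yields a pixels-per-centimeter scale that converts the feet's bounding boxes to centimeters. A minimal sketch of that conversion (the function name, box format, and default coin diameter are illustrative assumptions, not the project's actual API):

```python
def measure_foot_cm(foot_box_px, coin_box_px, coin_diameter_cm=2.5):
    """Convert a foot bounding box from pixels to centimeters.

    foot_box_px, coin_box_px: (x1, y1, x2, y2) bounding boxes in pixels.
    coin_diameter_cm: real-world diameter of the reference coin
    (an assumed value here; it depends on the coin used).
    """
    # The coin's pixel width against its known diameter gives the scale.
    coin_width_px = coin_box_px[2] - coin_box_px[0]
    px_per_cm = coin_width_px / coin_diameter_cm

    # Apply the same scale to the foot's bounding box.
    foot_width_cm = (foot_box_px[2] - foot_box_px[0]) / px_per_cm
    foot_height_cm = (foot_box_px[3] - foot_box_px[1]) / px_per_cm
    return foot_width_cm, foot_height_cm
```

This assumes the coin lies flat in the same plane as the feet; a tilted camera or coin would skew the scale.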
- Python 3.7+
- OpenCV
- NumPy
- fpdf
- matplotlib
- ultralytics
- flask
Clone the repository:

```bash
git clone https://github.com/Xer0bit/BRP-SizeMeasure
cd BRP-SizeMeasure
```
Create a virtual environment (optional but recommended):

```bash
python -m venv venv
source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
```
Install the required packages:

```bash
pip install -r requirements.txt
```
Prepare your YOLO models: ensure the YOLO models for detecting feet and coins are in the `models/` directory, as `models/feet.pt` and `models/coin.pt`.
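Before running anything, it can help to check that both weight files are actually in place, since detection fails without them. A small sketch (the helper name is illustrative, not part of the project):

```python
from pathlib import Path

# The two detector weights the project expects, as described above.
REQUIRED_MODELS = ('models/feet.pt', 'models/coin.pt')

def missing_models(model_paths=REQUIRED_MODELS):
    """Return the expected model files that are not present on disk."""
    return [p for p in model_paths if not Path(p).exists()]
```

Each present file is then loaded with the ultralytics API, e.g. `YOLO('models/feet.pt')`.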
Run the script:

```python
from image_processing import remove_background, detect_objects
from pdf_report import generate_pdf_report

# Example usage
original_image_path = 'path/to/original/image.jpg'
detection_results = detect_objects(remove_background(original_image_path))
output_pdf_path = 'path/to/output/report.pdf'
generate_pdf_report(detection_results, output_pdf_path, original_image_path)
```
View the generated PDF: the report will be saved to the path specified in `output_pdf_path`.
- `models/`: Directory containing the YOLO model files.
- `image_processing.py`: Script for image loading, background removal, and object detection.
- `pdf_report.py`: Script for generating the PDF report.
- `requirements.txt`: List of required Python packages.
- `README.md`: Project documentation.
- `example_usage.py`: Example script demonstrating how to use the project.
To use the YOLO models, ensure you have the appropriate `.pt` files for both feet and coins. You can download the pre-trained models from MODELS.
This project is licensed under the MIT License. See the LICENSE file for details.

