62 changes: 24 additions & 38 deletions REFACTORING_REPORT.md
@@ -31,15 +31,15 @@ MeshRF is a full-stack RF propagation and link analysis application for LoRa mesh …
| 3 | `src/components/Map/LinkAnalysisPanel.jsx` | 643 | HIGH | Pending |
| 4 | `src/components/Map/UI/SiteAnalysisResultsPanel.jsx` | 609 | HIGH | Pending |
| 5 | `src/components/Map/OptimizationLayer.jsx` | 517 | HIGH | Pending |
| 6 | `rf-engine/server.py` | 475 | HIGH | **REFACTORED** |
| 7 | `src/components/Map/UI/NodeManager.jsx` | 440 | MEDIUM | Pending |
| 8 | `src/components/Map/OptimizationResultsPanel.jsx` | 435 | MEDIUM | Pending |
| 9 | `src/components/Map/LinkLayer.jsx` | 429 | MEDIUM | Pending |
| 10 | `rf-engine/tasks/viewshed.py` | 398 | MEDIUM | Pending |
| 11 | `src/utils/rfMath.js` | 366 | LOW | Pending |
| 12 | `src/components/Map/BatchNodesPanel.jsx` | 354 | MEDIUM | Pending |
| 13 | `src/hooks/useViewshedTool.js` | 343 | MEDIUM | Pending |
| 14 | `rf-engine/tile_manager.py` | 334 | MEDIUM | **REFACTORED** |
| 15 | `src/components/Map/BatchProcessing.jsx` | 321 | LOW | Pending |
| 16 | `src/components/Map/UI/GuidanceOverlays.jsx` | 318 | LOW | Pending |
| 17 | `src/context/RFContext.jsx` | 307 | MEDIUM | **REFACTORED** (Facade) |
@@ -161,27 +161,17 @@ src/components/Map/

#### 6. `rf-engine/server.py` — 475 lines

**What it does**: Main FastAPI application with endpoints for link analysis, elevation lookups, terrain tile serving, and async Celery task management.
**Status**: Refactored (Phase 2)
- **Extracted Routers**:
- `routers/analysis.py`: Link analysis endpoint.
- `routers/elevation.py`: Elevation and tile endpoints.
- `routers/tasks.py`: Async task management.
- `routers/optimization.py`: Optimization and export endpoints.
- **Shared Dependencies**:
- `dependencies.py`: Handles Redis, TileManager, and Limiter instances.
- **Result**: `server.py` is now a minimal entry point focusing on app setup and middleware.

---

@@ -258,17 +248,14 @@

#### 11. `rf-engine/tile_manager.py` — 334 lines

**What it does**: High-performance elevation tile caching with request coalescing, thread pool management, Redis TTL caching, and interpolation.
**Status**: Refactored (Phase 2)
- **Extracted Components**:
- `rf-engine/cache_layer.py`: Encapsulates Redis caching operations.
- `rf-engine/elevation_client.py`: Manages OpenTopoData API interactions and retries.
- `rf-engine/grid_processor.py`: Contains static methods for grid interpolation and elevation extraction.
- **Result**: `TileManager` is now a clean orchestrator class.
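With caching, fetching, and grid math extracted, the orchestration reduces to a cache-aside loop. A minimal sketch — the `Fake*` classes, key format, and method names here are illustrative stand-ins, not the real components:

```python
# In-memory stand-ins for CacheLayer and ElevationClient (illustrative only).
class FakeCache:
    def __init__(self):
        self.store = {}
    def get_tile(self, key):
        return self.store.get(key)
    def cache_tile(self, key, data):
        self.store[key] = data

class FakeClient:
    def fetch_tile(self, x, y, z):
        return {"elevation": [0.0] * 256}

class TileManagerSketch:
    """Orchestrator: try the cache, fall back to the API, then populate."""
    def __init__(self, cache, client):
        self.cache = cache
        self.client = client

    def get_tile(self, x, y, z):
        key = f"tile:{z}:{x}:{y}"            # hypothetical key format
        tile = self.cache.get_tile(key)      # 1. try Redis first
        if tile is None:
            tile = self.client.fetch_tile(x, y, z)  # 2. fall back to the API
            if tile is not None:
                self.cache.cache_tile(key, tile)    # 3. populate the cache
        return tile
```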

---

@@ -323,15 +310,14 @@ src/hooks/

---

### Phase 2 — Backend API Structure (COMPLETED)

3. **server.py** (475 → ~80 lines): Refactored into `routers/` directory with `analysis.py`, `elevation.py`, `tasks.py`, `optimization.py`.
4. **tile_manager.py** (334 → ~120 lines): Extracted `cache_layer.py`, `elevation_client.py`, `grid_processor.py`.

**Status**: Verified with tests and import checks.

---

22 changes: 22 additions & 0 deletions rf-engine/cache_layer.py
@@ -0,0 +1,22 @@
import msgpack
import redis

class CacheLayer:
    """
    Handles Redis caching operations for elevation tiles.
    """
    def __init__(self, redis_client: redis.Redis, ttl: int = 30 * 24 * 60 * 60):
        self.redis = redis_client
        self.ttl = ttl

    def get_tile(self, key: str):
        """Retrieves a tile from the Redis cache."""
        packed = self.redis.get(key)
        if packed:
            return msgpack.unpackb(packed)
        return None

    def cache_tile(self, key: str, data: dict):
        """Stores a tile in the Redis cache with a TTL."""
        packed = msgpack.packb(data)
        self.redis.setex(key, self.ttl, packed)
24 changes: 24 additions & 0 deletions rf-engine/dependencies.py
@@ -0,0 +1,24 @@
import os
import redis
from tile_manager import TileManager
from optimization_service import OptimizationService
from slowapi import Limiter
from slowapi.util import get_remote_address

limiter = Limiter(key_func=get_remote_address)

REDIS_HOST = os.environ.get("REDIS_HOST", "redis")
REDIS_PORT = int(os.environ.get("REDIS_PORT", 6379))
REDIS_PASSWORD = os.environ.get("REDIS_PASSWORD", "changeme")

# Initialize Redis Client
redis_client = redis.Redis(
    host=REDIS_HOST,
    port=REDIS_PORT,
    db=0,
    password=REDIS_PASSWORD,
)

# Initialize Core Services
tile_manager = TileManager(redis_client)
optimization_service = OptimizationService(tile_manager)
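The env-with-default configuration pattern used above can be exercised standalone. The helper below is hypothetical — `dependencies.py` itself reads `os.environ` directly at import time — but it mirrors the same defaults:

```python
import os

def read_redis_config(env=None):
    """Mirror of the env-with-default pattern in dependencies.py (hypothetical helper)."""
    env = env if env is not None else os.environ
    return {
        "host": env.get("REDIS_HOST", "redis"),
        "port": int(env.get("REDIS_PORT", 6379)),
        "password": env.get("REDIS_PASSWORD", "changeme"),
    }

# With no variables set, the defaults apply.
cfg = read_redis_config(env={})
```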
111 changes: 111 additions & 0 deletions rf-engine/elevation_client.py
@@ -0,0 +1,111 @@
import os
import requests
import numpy as np
import logging
import mercantile
from concurrent.futures import ThreadPoolExecutor, TimeoutError
from requests.adapters import HTTPAdapter

logger = logging.getLogger(__name__)

class ElevationClient:
    """
    Handles interactions with the OpenTopoData API for elevation tiles.
    """
    def __init__(self, max_workers=30):
        # Connection pooling for high concurrency
        self.session = requests.Session()
        adapter = HTTPAdapter(pool_connections=50, pool_maxsize=50)
        self.session.mount('http://', adapter)
        self.session.mount('https://', adapter)

        self.executor = ThreadPoolExecutor(max_workers=max_workers, thread_name_prefix='elev_client_')

        self.base_url = os.environ.get('ELEVATION_API_URL', 'http://opentopodata:5000')
        self.dataset = os.environ.get('ELEVATION_DATASET', 'srtm30m')

    def fetch_tile(self, x, y, z):
        """
        Fetch elevation data from the OpenTopoData API.
        """
        bounds = mercantile.bounds(x, y, z)
        lat_min, lat_max = bounds.south, bounds.north
        lon_min, lon_max = bounds.west, bounds.east

        # Create a 16x16 grid of coordinates
        lats = np.linspace(lat_min, lat_max, 16)
        lons = np.linspace(lon_min, lon_max, 16)

        lat_grid, lon_grid = np.meshgrid(lats, lons)
        lat_flat = lat_grid.flatten()
        lon_flat = lon_grid.flatten()

        # OpenTopoData supports up to 100 locations per request.
        # We have 256 points (16x16), so split into 3 batches: 100, 100, 56.
        batch_size = 100

        batches = []
        for i in range(0, len(lat_flat), batch_size):
            batch_lats = lat_flat[i:i + batch_size]
            batch_lons = lon_flat[i:i + batch_size]
            locations = "|".join(f"{lat},{lon}" for lat, lon in zip(batch_lats, batch_lons))
            batches.append(locations)

        def fetch_batch_task(locations, batch_num):
            try:
                url = f"{self.base_url}/v1/{self.dataset}"
                response = self.session.get(
                    url,
                    params={'locations': locations},
                    timeout=10
                )

                if response.status_code == 200:
                    data = response.json()
                    if data.get('status') == 'OK' and 'results' in data:
                        return [result.get('elevation', 0.0) for result in data['results']]
                    error_msg = data.get('error', 'Unknown error')
                    logger.error(f"OpenTopoData batch {batch_num} error: {error_msg}")
                    return None
                elif response.status_code == 404:
                    logger.error(f"Dataset '{self.dataset}' not found. Check ELEVATION_DATASET env var and data files.")
                    return None
                else:
                    logger.warning(f"OpenTopoData batch {batch_num} failed with status {response.status_code}")
                    return None

            except requests.exceptions.Timeout:
                logger.error(f"OpenTopoData request timed out for batch {batch_num}")
                return None
            except requests.exceptions.ConnectionError:
                logger.error(f"Cannot connect to OpenTopoData at {self.base_url}. Is the container running?")
                return None
            except Exception as e:
                logger.error(f"Exception fetching OpenTopoData batch {batch_num}: {e}")
                return None

        # Execute batches in parallel
        futures = [self.executor.submit(fetch_batch_task, locs, i) for i, locs in enumerate(batches)]

        all_elevations = []
        for future in futures:
            try:
                batch_result = future.result(timeout=30)
            except Exception as e:  # covers concurrent.futures.TimeoutError as well
                logger.error(f"Tile fetch timed out or failed: {e}")
                return None
            if batch_result is None:
                return None
            all_elevations.extend(batch_result)

        if len(all_elevations) == 256:
            logger.info(f"Successfully fetched elevation data from OpenTopoData ({self.dataset}): min={min(all_elevations):.1f}m, max={max(all_elevations):.1f}m")
            return {"elevation": all_elevations}

        logger.error(f"Expected 256 elevation points, got {len(all_elevations)}")
        return None

    def shutdown(self):
        self.executor.shutdown(wait=False)
        self.session.close()
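The batch split in `fetch_tile` — 256 grid points against OpenTopoData's 100-locations-per-request cap — can be checked in isolation:

```python
# Stand-alone check of the batching arithmetic used in ElevationClient.fetch_tile.
points = list(range(16 * 16))   # 256 flattened grid points
batch_size = 100                # OpenTopoData per-request limit

batches = [points[i:i + batch_size] for i in range(0, len(points), batch_size)]
sizes = [len(b) for b in batches]   # → [100, 100, 56]
```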
77 changes: 77 additions & 0 deletions rf-engine/grid_processor.py
@@ -0,0 +1,77 @@
import numpy as np
import scipy.ndimage
import mercantile

class GridProcessor:
    """
    Handles grid interpolation and elevation extraction logic.
    """

    @staticmethod
    def get_interpolated_grid(tile_data, size=256):
        """
        Returns a (size, size) numpy array of elevation data for the tile.
        Upscales the low-res 16x16 fetched data.
        """
        if not tile_data or 'elevation' not in tile_data:
            return np.zeros((size, size))

        raw_elev = np.array(tile_data['elevation'])
        if raw_elev.size != 16 * 16:
            return np.zeros((size, size))

        grid_16 = raw_elev.reshape((16, 16)).T
        grid_16 = np.flipud(grid_16)

        zoom_factor = size / 16.0
        high_res_grid = scipy.ndimage.zoom(grid_16, zoom_factor, order=1)

        return high_res_grid

    @staticmethod
    def extract_elevation_from_tile(tile_data, lat, lon, tile):
        """
        Performs bilinear interpolation on the 16x16 grid to find the elevation at (lat, lon).
        """
        if not tile_data or 'elevation' not in tile_data:
            return 0.0

        raw_elev = np.array(tile_data['elevation'])
        if raw_elev.size != 256:
            return 0.0

        grid = raw_elev.reshape((16, 16))

        bounds = mercantile.bounds(tile)
        lat_min, lat_max = bounds.south, bounds.north
        lon_min, lon_max = bounds.west, bounds.east

        if lat_max == lat_min or lon_max == lon_min:
            return 0.0

        # Map (lat, lon) to fractional indices in the 16x16 grid
        u = (lat - lat_min) / (lat_max - lat_min) * 15.0
        v = (lon - lon_min) / (lon_max - lon_min) * 15.0

        u = np.clip(u, 0, 15)
        v = np.clip(v, 0, 15)

        i = int(np.floor(u))
        j = int(np.floor(v))

        u_ratio = u - i
        v_ratio = v - j

        i_next = min(i + 1, 15)
        j_next = min(j + 1, 15)

        # Four surrounding grid points
        p00 = grid[j, i]
        p10 = grid[j, i_next]
        p01 = grid[j_next, i]
        p11 = grid[j_next, i_next]

        # Blend along u, then along v
        val_j = (p00 * (1 - u_ratio)) + (p10 * u_ratio)
        val_jnext = (p01 * (1 - u_ratio)) + (p11 * u_ratio)

        final_elev = (val_j * (1 - v_ratio)) + (val_jnext * v_ratio)

        return float(final_elev)
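The `p00`/`p10`/`p01`/`p11` blend in `extract_elevation_from_tile` is standard bilinear interpolation. A standalone sketch (the `bilinear` helper below is illustrative, not the project's API) verifies it on a 2x2 grid where the expected value is known:

```python
import numpy as np

def bilinear(grid, u, v):
    """Bilinear interpolation at fractional indices (u, v), mirroring the blend above."""
    i, j = int(np.floor(u)), int(np.floor(v))
    u_ratio, v_ratio = u - i, v - j
    i_next = min(i + 1, grid.shape[1] - 1)
    j_next = min(j + 1, grid.shape[0] - 1)
    p00, p10 = grid[j, i], grid[j, i_next]
    p01, p11 = grid[j_next, i], grid[j_next, i_next]
    val_j = p00 * (1 - u_ratio) + p10 * u_ratio
    val_jnext = p01 * (1 - u_ratio) + p11 * u_ratio
    return float(val_j * (1 - v_ratio) + val_jnext * v_ratio)

grid = np.array([[0.0, 10.0],
                 [20.0, 30.0]])
center = bilinear(grid, 0.5, 0.5)   # midpoint blends all four corners → 15.0
```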
Empty file added rf-engine/routers/__init__.py