16 Commits

SHA1        Message                                                    Date
9e364af93e  Update README.md                                           2026-01-14 15:06:28 +01:00
d88c572b56  Update README.md                                           2026-01-14 12:11:18 +01:00
d8a9f80df6  Update README.md                                           2026-01-14 11:49:20 +01:00
cac84051e8  Update backup.py                                           2026-01-14 11:44:10 +01:00
e981912836  Remove legacy put_stream method                            2026-01-14 11:43:38 +01:00
            Removed legacy chunked upload method 'put_stream' in favor of 'put_file' for better proxy compatibility.
e5b7f9c373  Update version in manifest.json to 0.2.0                   2026-01-14 11:42:48 +01:00
88e7bb8d47  Update project name to 'Owncloud Backup Homeassistant'    2026-01-14 11:42:24 +01:00
ca49174419  Revise README for project title and version update         2026-01-14 11:42:03 +01:00
            Updated project title, version status, and added sections on upload reliability and troubleshooting.
53473f1f0f  Update CHANGELOG for version 0.2.0                         2026-01-14 11:40:56 +01:00
            Added new features and fixes for version 0.2.0, including improved compatibility and upload reliability.
0022558d25  Update issue tracker URL in manifest.json                  2026-01-14 10:50:01 +01:00
            Updated the issue tracker URL in the manifest file.
b59d84bacc  Update manifest.json                                       2026-01-14 10:49:27 +01:00
4171159264  Implement long operation timeouts for WebDAV client        2026-01-14 10:44:49 +01:00
            Added a timeout for long WebDAV operations to improve reliability.
a67a631c99  Implement spooling of byte stream to temp file             2026-01-14 10:44:24 +01:00
            Added functionality to spool an async byte stream to a temporary file for improved upload handling.
99c362b6d4  Add SPOOL_FLUSH_BYTES constant for uploads                 2026-01-14 10:44:00 +01:00
            Added constant for spooling to temporary file during uploads.
76715585ab  Update manifest.json for version 0.1.1-alpha               2026-01-14 10:43:18 +01:00
            Updated version number and links in manifest.
15e6ae9ab7  Update CHANGELOG for version 0.1.1-alpha                   2026-01-14 10:42:24 +01:00
            Improved upload reliability and adjusted client timeout for WebDAV.
6 changed files with 253 additions and 56 deletions

View File

@@ -4,6 +4,21 @@ All notable changes to this project will be documented in this file.
 The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
+## [0.2.0] - 2026-01-14
+### Added
+- Improved cross-version compatibility with Home Assistant backup metadata by normalizing backup schema fields (e.g., `addons`, `database_included`, etc.).
+- More robust metadata serialization for `AgentBackup` across Home Assistant versions (supports different serialization methods).
+### Fixed
+- Improved upload reliability by spooling backup streams to a temporary file and uploading with Content-Length (avoids chunked WebDAV uploads that may cause reverse proxy 504 timeouts).
+- Added non-restrictive client timeouts for long-running WebDAV operations to prevent client-side aborts.
+- Fixed backup listing failures caused by missing expected metadata keys in different Home Assistant versions.
+## [0.1.1-alpha] - 2026-01-14
+### Fixed
+- Improved upload reliability by spooling backup streams to a temporary file and uploading with Content-Length (avoids chunked WebDAV uploads that may cause reverse proxy 504 timeouts).
+- Set a non-restrictive client timeout for WebDAV PUT requests to prevent client-side premature aborts on slow connections.
 ## [0.1.0-alpha] - 2026-01-14
 ### Added
 - Initial alpha release

View File

@@ -1,4 +1,10 @@
-# Owncloud Backup Homeassistant by René Bachmann
+# Owncloud Backup Homeassistant
+[![Latest Release](https://img.shields.io/github/v/release/bahmcloud/owncloud-backup-ha?style=flat-square)](https://github.com/bahmcloud/owncloud-backup-ha/releases)
+[![HACS](https://img.shields.io/badge/HACS-Custom-orange.svg?style=flat-square)](https://hacs.xyz/)
+[![License](https://img.shields.io/github/license/bahmcloud/owncloud-backup-ha?style=flat-square)](LICENSE)
+[![Open in HACS](https://my.home-assistant.io/badges/hacs_repository.svg)](https://my.home-assistant.io/redirect/hacs_repository/?owner=bahmcloud&repository=owncloud-backup-ha&category=integration)
 Home Assistant custom integration that adds **ownCloud (Classic/Server)** as a **Backup Location / Backup Agent** using the official **WebDAV** interface.
@@ -8,8 +14,8 @@ This integration allows you to:
 - download and **restore** backups via the Home Assistant UI
 - authenticate using **either** an ownCloud **App Password** (recommended for 2FA) **or** the regular account password
-> **Status:** `0.1.0-alpha`
-> This is an early alpha release. Please test on a non-critical system first.
+> **Status:** `0.2.0`
+> This release focuses on reliability and compatibility across Home Assistant versions.
 ---
@@ -28,6 +34,15 @@ This integration allows you to:
 - ✅ English UI & documentation
 - ✅ HACS-ready repository structure
+### Upload reliability (important)
+To improve reliability behind reverse proxies and avoid WebDAV timeouts with chunked uploads,
+the integration **spools the backup to a temporary file** and then uploads it with a proper
+**Content-Length** header.
+### Home Assistant compatibility
+Home Assistant has evolved its backup metadata schema over time. This integration normalizes
+backup metadata keys to remain compatible across multiple Home Assistant versions.
 ---
 ## Requirements
@@ -41,12 +56,13 @@ This integration allows you to:
 ### Add as a custom repository
 1. In Home Assistant: **HACS → Integrations → ⋮ → Custom repositories**
-2. Add your repository URL
-3. Category: **Integration**
-4. Install **owncloud-backup-ha**
-5. Restart Home Assistant
-
-> This repository includes a `hacs.json` and follows the required `custom_components/` layout.
+2. Add repository URL:
+   ```
+   https://github.com/bahmcloud/owncloud-backup-ha
+   ```
+4. Category: **Integration**
+5. Install **owncloud-backup-ha**
+6. Restart Home Assistant
 ---
@@ -101,6 +117,15 @@ Home Assistant will download the `.tar` from ownCloud using the Backup Agent API
 ## Troubleshooting
+### "Upload failed" / HTTP 504 (Gateway Timeout)
+A 504 typically indicates a reverse proxy / gateway timeout (e.g., Nginx/Traefik/Cloudflare).
+This integration uploads with Content-Length (non-chunked) for better compatibility.
+If you still see 504:
+- Increase proxy timeouts (e.g. `proxy_read_timeout`, `proxy_send_timeout` in Nginx)
+- Ensure large uploads are allowed (`client_max_body_size` in Nginx)
+- Avoid buffering restrictions for WebDAV endpoints
 ### "Cannot connect"
 - Check your **Base URL**
 - Make sure the ownCloud user can access WebDAV
@@ -112,12 +137,6 @@ Home Assistant will download the `.tar` from ownCloud using the Backup Agent API
 - install the CA properly, or
 - temporarily disable **Verify SSL** (not recommended for production)
-### Missing backups in list
-- Ensure the configured backup folder is correct
-- Check that ownCloud contains either:
-  - `.json` metadata files (preferred), or
-  - `.tar` files (fallback)
 ---
 ## Security notes
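
The upload-reliability change described above boils down to: drain the backup stream to disk first so the total size is known, then send the file with an explicit `Content-Length` instead of chunked transfer encoding. A minimal stdlib sketch of that idea (illustration only, not the integration's actual code; `spool_to_tempfile` is a name made up here):

```python
import os
import tempfile


def spool_to_tempfile(chunks) -> tuple[str, int]:
    """Write an iterable of byte chunks to a temp file; return (path, size)."""
    fd, path = tempfile.mkstemp(prefix="owncloud_backup_", suffix=".tar")
    size = 0
    with os.fdopen(fd, "wb") as f:
        for chunk in chunks:
            f.write(chunk)
            size += len(chunk)
    return path, size


path, size = spool_to_tempfile([b"header", b"payload"])
# With the size known up front, the PUT can carry a Content-Length header,
# which reverse proxies generally handle better than chunked uploads:
headers = {"Content-Length": str(size)}
```

The trade-off is temporary disk usage equal to the backup size, in exchange for an upload proxies can reason about.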

View File

@@ -3,7 +3,10 @@ from __future__ import annotations
 import asyncio
 import json
 import logging
+import os
+import tempfile
 from collections.abc import AsyncIterator, Callable, Coroutine
+from dataclasses import asdict, is_dataclass
 from typing import Any

 from homeassistant.components.backup import (
@@ -12,7 +15,6 @@ from homeassistant.components.backup import (
     BackupAgentError,
     BackupNotFound,
 )
 from homeassistant.core import HomeAssistant, callback

 from .const import (
@@ -20,6 +22,7 @@ from .const import (
     DATA_CLIENT,
     DOMAIN,
     META_SUFFIX,
+    SPOOL_FLUSH_BYTES,
     TAR_PREFIX,
     TAR_SUFFIX,
 )
@@ -36,14 +39,112 @@ def _make_meta_name(backup_id: str) -> str:
     return f"{TAR_PREFIX}{backup_id}{META_SUFFIX}"

+
+def _normalize_backup_dict(d: dict[str, Any]) -> dict[str, Any]:
+    """Normalize backup metadata to satisfy multiple HA schema versions."""
+    d = dict(d)
+    # Identity fields (varies by HA versions)
+    d.setdefault("backup_id", d.get("slug", ""))
+    d.setdefault("slug", d.get("backup_id", ""))
+    # Presentation fields
+    d.setdefault("name", f"ownCloud backup ({d.get('backup_id') or d.get('slug') or 'unknown'})")
+    d.setdefault("date", d.get("created_at", ""))
+    d.setdefault("size", 0)
+    d.setdefault("protected", False)
+    d.setdefault("compressed", True)
+    d.setdefault("extra_metadata", {})
+    # Content selections
+    d.setdefault("addons", [])
+    d.setdefault("folders", [])
+    # Older schema booleans
+    d.setdefault("database", True)
+    d.setdefault("homeassistant", True)
+    # Newer schema booleans
+    d.setdefault("database_included", d.get("database", True))
+    d.setdefault("homeassistant_included", d.get("homeassistant", True))
+    d.setdefault("addons_included", bool(d.get("addons", [])))
+    d.setdefault("folders_included", bool(d.get("folders", [])))
+    # Keep old keys consistent
+    d["database"] = bool(d.get("database_included", True))
+    d["homeassistant"] = bool(d.get("homeassistant_included", True))
+    if not isinstance(d.get("addons"), list):
+        d["addons"] = []
+    if not isinstance(d.get("folders"), list):
+        d["folders"] = []
+    return d
+
+
+def _agentbackup_to_dict(backup: AgentBackup) -> dict[str, Any]:
+    """Serialize AgentBackup in a HA-version-independent way."""
+    if hasattr(backup, "to_dict"):
+        raw = backup.to_dict()  # type: ignore[assignment]
+    elif hasattr(backup, "as_dict"):
+        raw = backup.as_dict()  # type: ignore[assignment]
+    elif is_dataclass(backup):
+        raw = asdict(backup)
+    else:
+        raw = {k: v for k, v in vars(backup).items() if not k.startswith("_")}
+    return _normalize_backup_dict(raw)
+
+
 def _agentbackup_from_dict(d: dict[str, Any]) -> AgentBackup:
-    """Best-effort create AgentBackup across HA versions."""
+    """Create AgentBackup from dict and ensure required keys exist."""
+    d = _normalize_backup_dict(d)
     from_dict = getattr(AgentBackup, "from_dict", None)
     if callable(from_dict):
         return from_dict(d)  # type: ignore[misc]
     return AgentBackup(**d)  # type: ignore[arg-type]
+
+
+async def _spool_stream_to_tempfile(stream: AsyncIterator[bytes]) -> tuple[str, int]:
+    """Spool an async byte stream into a temporary file and return (path, size)."""
+    fd, path = tempfile.mkstemp(prefix="owncloud_backup_", suffix=".tar")
+    os.close(fd)
+    size = 0
+    buf = bytearray()
+    try:
+        async for chunk in stream:
+            if not chunk:
+                continue
+            buf.extend(chunk)
+            size += len(chunk)
+            if len(buf) >= SPOOL_FLUSH_BYTES:
+                data = bytes(buf)
+                buf.clear()
+                await asyncio.to_thread(_write_bytes_to_file, path, data, append=True)
+        if buf:
+            await asyncio.to_thread(_write_bytes_to_file, path, bytes(buf), append=True)
+        return path, size
+    except Exception:
+        try:
+            os.remove(path)
+        except OSError:
+            pass
+        raise
+
+
+def _write_bytes_to_file(path: str, data: bytes, *, append: bool) -> None:
+    mode = "ab" if append else "wb"
+    with open(path, mode) as f:
+        f.write(data)
+        f.flush()
+

 class OwnCloudBackupAgent(BackupAgent):
     """Backup agent storing backups in ownCloud via WebDAV."""
@@ -61,21 +162,32 @@ class OwnCloudBackupAgent(BackupAgent):
         backup: AgentBackup,
         **kwargs: Any,
     ) -> None:
-        """Upload a backup + metadata sidecar."""
+        """Upload a backup + metadata sidecar (spooled for non-chunked PUT)."""
+        temp_path: str | None = None
         try:
             tar_name = _make_tar_name(backup.backup_id)
             meta_name = _make_meta_name(backup.backup_id)
-            # 1) Upload tar stream
+            # 1) Spool tar stream to temp file
             stream = await open_stream()
-            await self._client.put_stream(tar_name, stream)
-            # 2) Upload metadata JSON (small)
-            meta_bytes = json.dumps(backup.to_dict(), ensure_ascii=False).encode("utf-8")
+            temp_path, size = await _spool_stream_to_tempfile(stream)
+            # 2) Upload tar file with Content-Length
+            await self._client.put_file(tar_name, temp_path, size)
+            # 3) Upload normalized metadata JSON
+            meta_dict = _agentbackup_to_dict(backup)
+            meta_bytes = json.dumps(meta_dict, ensure_ascii=False).encode("utf-8")
             await self._client.put_bytes(meta_name, meta_bytes)
         except Exception as err:  # noqa: BLE001
             raise BackupAgentError(f"Upload to ownCloud failed: {err}") from err
+        finally:
+            if temp_path:
+                try:
+                    os.remove(temp_path)
+                except OSError:
+                    pass

     async def async_list_backups(self, **kwargs: Any) -> list[AgentBackup]:
         """List backups by reading metadata sidecars; fallback to tar stat if missing."""
@@ -87,7 +199,6 @@ class OwnCloudBackupAgent(BackupAgent):
         backups: list[AgentBackup] = []
-        # 1) Load metadata sidecars (limited concurrency)
         sem = asyncio.Semaphore(5)

         async def fetch_meta(meta_name: str) -> None:
@@ -101,7 +212,6 @@ class OwnCloudBackupAgent(BackupAgent):
         await asyncio.gather(*(fetch_meta(m) for m in meta_files))
-        # 2) Fallback: tar without meta -> synthesize minimal AgentBackup
         known_ids = {b.backup_id for b in backups}
         for tar_name in tar_files:
             backup_id = tar_name.removeprefix(TAR_PREFIX).removesuffix(TAR_SUFFIX)
@@ -109,13 +219,21 @@ class OwnCloudBackupAgent(BackupAgent):
                 continue
             info = await self._client.stat(tar_name)
-            d = {
-                "backup_id": backup_id,
-                "name": f"ownCloud backup ({backup_id})",
-                "date": info.get("modified_iso", ""),
-                "size": info.get("size", 0),
-                "protected": False,
-            }
+            d = _normalize_backup_dict(
+                {
+                    "backup_id": backup_id,
+                    "name": f"ownCloud backup ({backup_id})",
+                    "date": info.get("modified_iso", ""),
+                    "size": info.get("size", 0),
+                    # conservative defaults:
+                    "database_included": True,
+                    "homeassistant_included": True,
+                    "addons_included": False,
+                    "folders_included": False,
+                    "addons": [],
+                    "folders": [],
+                }
+            )
             backups.append(_agentbackup_from_dict(d))

         backups.sort(key=lambda b: str(b.date), reverse=True)
@@ -129,7 +247,6 @@ class OwnCloudBackupAgent(BackupAgent):
         meta_name = _make_meta_name(backup_id)
         tar_name = _make_tar_name(backup_id)
-        # 1) Try meta
         try:
             raw = await self._client.get_bytes(meta_name)
             d = json.loads(raw.decode("utf-8"))
@@ -139,16 +256,22 @@ class OwnCloudBackupAgent(BackupAgent):
         except Exception as err:  # noqa: BLE001
             raise BackupAgentError(f"Get backup metadata failed: {err}") from err
-        # 2) Fallback to tar stat
         try:
             info = await self._client.stat(tar_name)
-            d = {
-                "backup_id": backup_id,
-                "name": f"ownCloud backup ({backup_id})",
-                "date": info.get("modified_iso", ""),
-                "size": info.get("size", 0),
-                "protected": False,
-            }
+            d = _normalize_backup_dict(
+                {
+                    "backup_id": backup_id,
+                    "name": f"ownCloud backup ({backup_id})",
+                    "date": info.get("modified_iso", ""),
+                    "size": info.get("size", 0),
+                    "database_included": True,
+                    "homeassistant_included": True,
+                    "addons_included": False,
+                    "folders_included": False,
+                    "addons": [],
+                    "folders": [],
+                }
+            )
             return _agentbackup_from_dict(d)
         except FileNotFoundError as err:
             raise BackupNotFound(f"Backup not found: {backup_id}") from err
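
The `_normalize_backup_dict` helper added above bridges older metadata keys (`slug`, `database`) and newer ones (`backup_id`, `database_included`) by filling whichever side is missing. A condensed, runnable sketch of that idea (trimmed to a few representative keys; not the full helper):

```python
from typing import Any


def normalize_backup_dict(d: dict[str, Any]) -> dict[str, Any]:
    """Condensed sketch of the key-normalization idea from the diff above."""
    d = dict(d)
    d.setdefault("backup_id", d.get("slug", ""))  # old key -> new key
    d.setdefault("slug", d.get("backup_id", ""))  # new key -> old key
    d.setdefault("database_included", d.get("database", True))
    d.setdefault("homeassistant_included", d.get("homeassistant", True))
    d.setdefault("addons", [])
    d.setdefault("folders", [])
    return d


# A legacy-style metadata dict gains the keys newer schemas expect:
legacy = {"slug": "abc123", "database": False}
norm = normalize_backup_dict(legacy)
```

Because every key is filled via `setdefault`, a dict that already uses the newer schema passes through unchanged.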

View File

@@ -14,3 +14,6 @@ DATA_BACKUP_AGENT_LISTENERS = "backup_agent_listeners"
 TAR_PREFIX = "ha_backup_"
 TAR_SUFFIX = ".tar"
 META_SUFFIX = ".json"
+# Spooling to temp file to avoid chunked WebDAV uploads
+SPOOL_FLUSH_BYTES = 1024 * 1024  # 1 MiB
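
`SPOOL_FLUSH_BYTES` is the buffering threshold used while spooling: incoming chunks accumulate in memory and are written to the temp file once the buffer reaches 1 MiB, with one final write for any partial remainder. A small sketch of that batching behavior (illustrative names and a smaller threshold for the example):

```python
SPOOL_FLUSH_BYTES = 1024 * 1024  # mirrors the constant above


def chunked_flushes(chunks, flush_bytes=SPOOL_FLUSH_BYTES):
    """Buffer chunks and record a write each time the buffer reaches flush_bytes."""
    writes = []          # sizes of each file write, in order
    buf = bytearray()
    total = 0
    for chunk in chunks:
        buf.extend(chunk)
        total += len(chunk)
        if len(buf) >= flush_bytes:
            writes.append(len(buf))
            buf.clear()
    if buf:  # final partial buffer
        writes.append(len(buf))
    return writes, total


# Two 700-byte chunks cross a 1024-byte threshold after the second chunk:
writes, total = chunked_flushes([b"x" * 700, b"y" * 700, b"z" * 100], flush_bytes=1024)
```

The threshold trades memory (at most roughly one buffer plus one chunk) against the number of filesystem writes.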

View File

@@ -1,10 +1,10 @@
 {
   "domain": "owncloud_backup",
   "name": "ownCloud Backup (WebDAV)",
-  "version": "0.1.0-alpha",
-  "documentation": "https://github.com/your-org/owncloud-backup-ha",
-  "issue_tracker": "https://github.com/your-org/owncloud-backup-ha/issues",
-  "codeowners": [],
+  "version": "0.2.0",
+  "documentation": "https://github.com/bahmcloud/owncloud-backup-ha/",
+  "issue_tracker": "https://github.com/bahmcloud/owncloud-backup-ha/issues",
+  "codeowners": ["@bahmcloud"],
   "config_flow": true,
   "integration_type": "service",
   "iot_class": "cloud_push",

View File

@@ -2,6 +2,7 @@ from __future__ import annotations
 import base64
 import logging
+import os
 import xml.etree.ElementTree as ET
 from collections.abc import AsyncIterator
 from datetime import datetime, timezone
@@ -9,6 +10,7 @@ from email.utils import parsedate_to_datetime
 from typing import Final
 from urllib.parse import quote, urljoin

+import aiohttp
 from aiohttp import ClientResponseError, ClientSession
 from yarl import URL
@@ -46,6 +48,11 @@ class WebDavClient:
         ]
         self._cached_root: str | None = None

+        # Non-restrictive client timeouts for potentially long WebDAV operations
+        self._timeout_long = aiohttp.ClientTimeout(
+            total=None, connect=60, sock_connect=60, sock_read=None
+        )

     def _auth_header(self) -> str:
         token = base64.b64encode(f"{self._username}:{self._password}".encode("utf-8")).decode("ascii")
         return f"Basic {token}"
@@ -82,6 +89,7 @@ class WebDavClient:
                 b'<d:propfind xmlns:d="DAV:"><d:prop><d:resourcetype/></d:prop></d:propfind>'
             ),
             raise_for_status=True,
+            timeout=self._timeout_long,
         ):
             self._cached_root = root
             return root
@@ -106,6 +114,7 @@ class WebDavClient:
                 base_folder,
                 headers=self._headers({"Depth": "0"}),
                 raise_for_status=True,
+                timeout=self._timeout_long,
             ):
                 return
         except ClientResponseError as err:
@@ -127,7 +136,11 @@ class WebDavClient:
         # exists?
         try:
             async with self._session.request(
-                "PROPFIND", url, headers=self._headers({"Depth": "0"}), raise_for_status=True
+                "PROPFIND",
+                url,
+                headers=self._headers({"Depth": "0"}),
+                raise_for_status=True,
+                timeout=self._timeout_long,
             ):
                 return
         except ClientResponseError as err:
@@ -135,7 +148,9 @@ class WebDavClient:
             raise
         # create
-        async with self._session.request("MKCOL", url, headers=self._headers()) as resp:
+        async with self._session.request(
+            "MKCOL", url, headers=self._headers(), timeout=self._timeout_long
+        ) as resp:
             if resp.status in (201, 405):
                 return
             text = await resp.text()
@@ -154,6 +169,7 @@ class WebDavClient:
                 b'<d:propfind xmlns:d="DAV:"><d:prop><d:displayname/></d:prop></d:propfind>'
             ),
             raise_for_status=True,
+            timeout=self._timeout_long,
         ) as resp:
             body = await resp.text()
@@ -195,24 +211,44 @@ class WebDavClient:
     async def put_bytes(self, name: str, data: bytes) -> None:
         folder = await self._base_folder_url()
         url = self._file_url(folder, name)
-        async with self._session.put(url, data=data, headers=self._headers(), raise_for_status=True):
+        async with self._session.put(
+            url,
+            data=data,
+            headers=self._headers({"Content-Length": str(len(data))}),
+            raise_for_status=True,
+            timeout=self._timeout_long,
+        ):
             return

-    async def put_stream(self, name: str, stream: AsyncIterator[bytes]) -> None:
+    async def put_file(self, name: str, path: str, size: int) -> None:
+        """Upload a local file with an explicit Content-Length (non-chunked)."""
         folder = await self._base_folder_url()
         url = self._file_url(folder, name)
-        async def gen():
-            async for chunk in stream:
-                yield chunk
-        async with self._session.put(url, data=gen(), headers=self._headers(), raise_for_status=True):
-            return
+        # Ensure correct size if caller passes 0/unknown
+        if size <= 0:
+            try:
+                size = os.path.getsize(path)
+            except OSError:
+                size = 0
+        headers = {"Content-Length": str(size)} if size > 0 else {}
+        # aiohttp will stream file content; with Content-Length set, proxies are usually happier.
+        with open(path, "rb") as f:
+            async with self._session.put(
+                url,
+                data=f,
+                headers=self._headers(headers),
+                raise_for_status=True,
+                timeout=self._timeout_long,
+            ):
+                return

     async def get_bytes(self, name: str) -> bytes:
         folder = await self._base_folder_url()
         url = self._file_url(folder, name)
-        async with self._session.get(url, headers=self._headers()) as resp:
+        async with self._session.get(url, headers=self._headers(), timeout=self._timeout_long) as resp:
             if resp.status == 404:
                 raise FileNotFoundError(name)
             resp.raise_for_status()
@@ -221,7 +257,7 @@ class WebDavClient:
     async def get_stream(self, name: str) -> AsyncIterator[bytes]:
         folder = await self._base_folder_url()
         url = self._file_url(folder, name)
-        resp = await self._session.get(url, headers=self._headers())
+        resp = await self._session.get(url, headers=self._headers(), timeout=self._timeout_long)
         if resp.status == 404:
             await resp.release()
             raise FileNotFoundError(name)
@@ -239,7 +275,7 @@ class WebDavClient:
     async def delete(self, name: str) -> None:
         folder = await self._base_folder_url()
         url = self._file_url(folder, name)
-        async with self._session.delete(url, headers=self._headers()) as resp:
+        async with self._session.delete(url, headers=self._headers(), timeout=self._timeout_long) as resp:
             if resp.status == 404:
                 raise FileNotFoundError(name)
             if resp.status in (200, 202, 204):
@@ -267,6 +303,7 @@ class WebDavClient:
             url,
             headers=self._headers({"Depth": "0", "Content-Type": "application/xml; charset=utf-8"}),
             data=body,
+            timeout=self._timeout_long,
         ) as resp:
             if resp.status == 404:
                 raise FileNotFoundError(name)
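
The `put_file` change above trusts the caller's size when it is positive, falls back to `os.path.getsize` when it is 0/unknown, and only sends `Content-Length` when a positive size results. That fallback logic in isolation (a sketch; `content_length_header` is a name invented here, not part of the client):

```python
import os
import tempfile


def content_length_header(path: str, size: int) -> dict[str, str]:
    """Sketch of put_file's size fallback: trust the caller, else stat the file."""
    if size <= 0:
        try:
            size = os.path.getsize(path)
        except OSError:
            size = 0  # unknown size -> no Content-Length header
    return {"Content-Length": str(size)} if size > 0 else {}


# e.g. with a 5-byte file and an unknown (0) size from the caller:
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
    tmp = f.name
hdrs = content_length_header(tmp, 0)
os.remove(tmp)
```

Omitting the header entirely when the size cannot be determined lets the HTTP client fall back to chunked encoding rather than sending a wrong length.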