Installation

pip install hyperbrowser

Usage

from hyperbrowser import Hyperbrowser

client = Hyperbrowser(api_key="your_api_key")

Create Session

Creates a new browser session.

Method: client.create_session(params: Optional[CreateSessionParams] = None): SessionDetail

Endpoint: POST /api/session

Parameters:

  • CreateSessionParams:
    • use_stealth?: bool - Use stealth mode.
    • use_proxy?: bool - Use proxy.
    • proxy_server?: str - Proxy server URL to route the session through.
    • proxy_server_username?: str - Username for proxy server authentication.
    • proxy_server_password?: str - Password for proxy server authentication.
    • proxy_country?: Country - Desired proxy country.
    • operating_systems?: OperatingSystem[] - Preferred operating systems for the session. Possible values are:
      • "windows"
      • "android"
      • "macos"
      • "linux"
      • "ios"
    • device?: ("desktop" | "mobile")[] - Preferred device types. Possible values are:
      • "desktop"
      • "mobile"
    • platform?: Platform[] - Preferred browser platforms. Possible values are:
      • "chrome"
      • "firefox"
      • "safari"
      • "edge"
    • locales?: ISO639_1[] - Preferred locales (languages) for the session. Use ISO 639-1 codes.
    • screen?: ScreenConfig - Screen configuration for the session.
      • width: int - Screen width in pixels.
      • height: int - Screen height in pixels.
    • solve_captchas?: bool - Automatically solve captchas.
    • adblock?: bool - Block ads.
    • trackers?: bool - Block trackers.
    • annoyances?: bool - Block annoyances such as cookie notices.

Response: SessionDetail

Example:

session = client.create_session()
print(session.id)
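
For finer control, pass CreateSessionParams (defined under Types below). A minimal sketch; the models import path is an assumption and may differ by SDK version:

from hyperbrowser import Hyperbrowser
from hyperbrowser.models import CreateSessionParams, ScreenConfig  # import path assumed

client = Hyperbrowser(api_key="your_api_key")

# Stealth desktop session with ad blocking and a fixed screen size.
session = client.create_session(
    CreateSessionParams(
        use_stealth=True,
        device=["desktop"],
        operating_systems=["macos"],
        platform=["chrome"],
        screen=ScreenConfig(width=1920, height=1080),
        adblock=True,
    )
)
print(session.id)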

Get Session Details

Retrieves details of a specific session.

Method: client.get_session(id: str): SessionDetail

Endpoint: GET /api/session/{id}

Parameters:

  • id: str - Session ID

Response: SessionDetail

Example:

session = client.get_session('182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e')
print(session.session_url)
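
The websocket_url returned on SessionDetail is the endpoint for driving the session's browser. Assuming it is a standard CDP endpoint (an assumption about the service, not documented here), it can be paired with an automation library such as Playwright:

# Sketch: connect Playwright to a session over CDP.
# Assumes session.websocket_url is a CDP-compatible endpoint.
from playwright.sync_api import sync_playwright

session = client.create_session()
with sync_playwright() as p:
    browser = p.chromium.connect_over_cdp(session.websocket_url)
    context = browser.contexts[0] if browser.contexts else browser.new_context()
    page = context.new_page()
    page.goto("https://example.com")
    print(page.title())
client.stop_session(session.id)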

List Sessions

Retrieves a list of all sessions with optional filtering.

Method: client.get_session_list(params: SessionListParams): SessionListResponse

Endpoint: GET /api/sessions

Parameters:

  • SessionListParams:
    • status?: "active" | "closed" | "error" - Filter sessions by status
    • page?: int - Page number (default: 1, min: 1)
    • per_page?: int - Results per page (default: 20, min: 1, max: 100)

Response: SessionListResponse

Example:

from hyperbrowser.models import SessionListParams  # import path assumed

params = SessionListParams(status="active", page=1, per_page=50)
response = client.get_session_list(params)
print(f"Total sessions: {response.total_count}")
print(f"More pages available: {response.has_more}")
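
Since results are paginated, collecting every matching session means looping until has_more is false:

from hyperbrowser.models import SessionListParams  # import path assumed

page = 1
active_sessions = []
while True:
    response = client.get_session_list(
        SessionListParams(status="active", page=page, per_page=100)
    )
    active_sessions.extend(response.sessions)
    if not response.has_more:
        break
    page += 1
print(f"Fetched {len(active_sessions)} active sessions")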

Stop Session

Stops a running session.

Method: client.stop_session(id: str): BasicResponse

Endpoint: PUT /api/session/{id}/stop

Parameters:

  • id: str - Session ID

Response: BasicResponse

Example:

response = client.stop_session('182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e')
print(f"Session stopped: {response.success}")
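
Sessions remain active until stopped, so it is good practice to pair create_session with stop_session, for example in a try/finally block:

session = client.create_session()
try:
    # ... drive the browser via session.websocket_url ...
    print(session.session_url)
finally:
    client.stop_session(session.id)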

Start Scrape Job

Starts a scrape job for a given URL.

Method: client.start_scrape_job(params: StartScrapeJobParams): StartScrapeJobResponse

Endpoint: POST /api/scrape

Parameters:

  • StartScrapeJobParams:
    • url: str - URL to scrape
    • use_proxy?: bool - Use proxy (default: false)
    • solve_captchas?: bool - Solve captchas (default: false)

Response: StartScrapeJobResponse

Example:

from hyperbrowser.models import StartScrapeJobParams  # import path assumed

response = client.start_scrape_job(StartScrapeJobParams(url="https://example.com"))
print(response.job_id)

Get Scrape Job

Retrieves details of a specific scrape job.

Method: client.get_scrape_job(id: str): ScrapeJobResponse

Endpoint: GET /api/scrape/{id}

Parameters:

  • id: str - Scrape job ID

Response: ScrapeJobResponse

Example:

response = client.get_scrape_job('182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e')
print(response.status)
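
Scrape jobs run asynchronously, so the usual pattern is to start a job and poll get_scrape_job until its status settles (status values per ScrapeJobStatus under Types). A sketch:

import time

from hyperbrowser.models import StartScrapeJobParams  # import path assumed

job = client.start_scrape_job(StartScrapeJobParams(url="https://example.com"))

# Poll until the job completes or fails.
while True:
    result = client.get_scrape_job(job.job_id)
    if result.status in ("completed", "failed"):
        break
    time.sleep(2)

if result.status == "completed" and result.data:
    print(result.data.metadata.title)
    print(result.data.markdown[:200])
else:
    print(f"Scrape failed: {result.error}")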

Start Crawl Job

Starts a crawl job for a given URL.

Method: client.start_crawl_job(params: StartCrawlJobParams): StartCrawlJobResponse

Endpoint: POST /api/crawl

Parameters:

  • StartCrawlJobParams:
    • url: str - URL to crawl
    • max_pages: int - Maximum number of pages to crawl
    • follow_links?: bool - Follow links found on crawled pages (default: true)
    • exclude_patterns?: List[str] - URL patterns to exclude from the crawl
    • include_patterns?: List[str] - URL patterns to include in the crawl
    • use_proxy?: bool - Use proxy (default: false)
    • solve_captchas?: bool - Solve captchas (default: false)

Response: StartCrawlJobResponse

Example:

from hyperbrowser.models import StartCrawlJobParams  # import path assumed

response = client.start_crawl_job(StartCrawlJobParams(url="https://example.com", max_pages=10))
print(response.job_id)

Get Crawl Job

Retrieves details of a specific crawl job.

Method: client.get_crawl_job(id: str, params: Optional[GetCrawlJobParams] = None): CrawlJobResponse

Endpoint: GET /api/crawl/{id}

Parameters:

  • id: str - Crawl job ID
  • params?: GetCrawlJobParams - Optional batching controls for the returned pages (see GetCrawlJobParams under Types)

Response: CrawlJobResponse

Example:

response = client.get_crawl_job('182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e')
print(response.status)
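
Crawl jobs also complete asynchronously, and the batching fields on CrawlJobResponse suggest crawled pages are returned in batches controlled by GetCrawlJobParams. A sketch, assuming get_crawl_job accepts those params as shown above:

import time

from hyperbrowser.models import GetCrawlJobParams, StartCrawlJobParams  # import path assumed

job = client.start_crawl_job(StartCrawlJobParams(url="https://example.com", max_pages=10))

# Poll until the crawl completes or fails.
while True:
    result = client.get_crawl_job(job.job_id)
    if result.status in ("completed", "failed"):
        break
    time.sleep(5)

# Walk the crawled pages batch by batch.
for batch in range(1, result.total_page_batches + 1):
    batch_result = client.get_crawl_job(job.job_id, GetCrawlJobParams(page=batch, batch_size=10))
    for crawled_page in batch_result.data or []:
        print(crawled_page.url, crawled_page.metadata.title)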

Types

BasicResponse

class BasicResponse(BaseModel):
    success: bool

Session

class Session(BaseModel):
    id: str
    team_id: str
    status: SessionStatus  # "active" | "closed" | "error"
    created_at: datetime
    updated_at: datetime
    start_time: Optional[int] = None
    end_time: Optional[int] = None
    duration: Optional[int] = None
    session_url: str

SessionDetail

class SessionDetail(Session):
    websocket_url: Optional[str] = None

SessionListParams

class SessionListParams(BaseModel):
    status: Optional[SessionStatus] = None
    page: int = 1  # minimum: 1
    per_page: int = 20  # minimum: 1, maximum: 100

SessionListResponse

class SessionListResponse(BaseModel):
    sessions: List[Session]
    total_count: int
    page: int
    per_page: int

    @property
    def has_more(self) -> bool: ...

    @property
    def total_pages(self) -> int: ...

CreateSessionParams

class CreateSessionParams(BaseModel):
    use_stealth: bool = False
    use_proxy: bool = False
    proxy_server: Optional[str] = None
    proxy_server_password: Optional[str] = None
    proxy_server_username: Optional[str] = None
    proxy_country: Optional[Country] = "US"
    operating_systems: Optional[List[OperatingSystem]] = None
    device: Optional[List[Literal["desktop", "mobile"]]] = None
    platform: Optional[List[Platform]] = None
    locales: List[ISO639_1] = ["en"]
    screen: Optional[ScreenConfig] = None
    solve_captchas: bool = False
    adblock: bool = False
    trackers: bool = False
    annoyances: bool = False

ScreenConfig

class ScreenConfig(BaseModel):
    width: int = 1280
    height: int = 720

StartScrapeJobParams

class StartScrapeJobParams(BaseModel):
    url: str
    use_proxy: bool = False
    solve_captchas: bool = False

StartScrapeJobResponse

class StartScrapeJobResponse(BaseModel):
    job_id: str

ScrapeJobMetadata

class ScrapeJobMetadata(BaseModel):
    title: str
    description: str
    robots: str
    og_title: str
    og_description: str
    og_url: str
    og_image: str
    og_locale_alternate: List[str]
    og_site_name: str
    source_url: str

ScrapeJobData

class ScrapeJobData(BaseModel):
    metadata: ScrapeJobMetadata
    markdown: str

ScrapeJobResponse

class ScrapeJobResponse(BaseModel):
    status: ScrapeJobStatus
    error: Optional[str] = None
    data: Optional[ScrapeJobData] = None

StartCrawlJobParams

class StartCrawlJobParams(BaseModel):
    url: str
    max_pages: int
    follow_links: bool = True
    exclude_patterns: List[str] = []
    include_patterns: List[str] = []
    use_proxy: bool = False
    solve_captchas: bool = False

StartCrawlJobResponse

class StartCrawlJobResponse(BaseModel):
    job_id: str

CrawledPageMetadata

class CrawledPageMetadata(BaseModel):
    title: str
    description: str
    robots: str
    og_title: str
    og_description: str
    og_url: str
    og_image: str
    og_locale_alternate: List[str]
    og_site_name: str
    source_url: str

CrawledPage

class CrawledPage(BaseModel):
    metadata: CrawledPageMetadata
    markdown: str
    url: str

GetCrawlJobParams

class GetCrawlJobParams(BaseModel):
    page: Optional[int] = None
    batch_size: Optional[int] = Field(
        default=10, ge=1, le=50, serialization_alias="batchSize"
    )

CrawlJobResponse

class CrawlJobResponse(BaseModel):
    status: CrawlJobStatus
    data: Optional[List[CrawledPage]] = None
    error: Optional[str] = None
    total_crawled_pages: int
    total_page_batches: int
    current_page_batch: int
    batch_size: int

SessionStatus

SessionStatus = Literal["active", "closed", "error"]

Country

Country = Literal["AD", "AE", "AF", ...other country codes]

OperatingSystem

OperatingSystem = Literal["windows", "macos", "linux", "android", "ios"]

Platform

Platform = Literal["chrome", "firefox", "safari", "edge"]

ISO639_1

ISO639_1 = Literal["en", "es", "fr", ...other language codes]

ScrapeJobStatus

ScrapeJobStatus = Literal["pending", "running", "completed", "failed"]

CrawlJobStatus

CrawlJobStatus = Literal["pending", "running", "completed", "failed"]

Client Initialization

Example:

from hyperbrowser import Hyperbrowser

client = Hyperbrowser(
    api_key="your-api-key",
    base_url="https://app.hyperbrowser.ai"  # optional
)