Confluent Cloud

Confluent Cloud is a fully-managed, cloud-native data streaming platform built on Apache Kafka. It provides real-time data pipelines, stream processing, and event-driven architecture capabilities for building modern data infrastructure. Confluent Cloud handles the operational complexity of running Kafka at scale, allowing you to focus on building streaming applications.

Key Features

  • Kafka Clusters: Fully-managed Apache Kafka clusters with multiple tiers (Basic, Standard, Dedicated) for different workload requirements
  • Schema Registry: Centralized schema management with support for Avro, JSON Schema, and Protobuf for data governance
  • ksqlDB: SQL-based stream processing for building real-time applications without writing code
  • Apache Flink: Advanced stream processing with complex event processing, windowing, and stateful transformations
  • Kafka Connect: 100+ pre-built connectors for integrating with databases, cloud storage, and SaaS applications
  • Stream Sharing: Securely share streaming data with partners and customers across organizations
  • Multi-Cloud: Deploy across AWS, Google Cloud, and Azure with consistent APIs and management

Authentication Types

This connector supports a single authentication method:

  • API Keys - Cloud API Keys for management operations using HTTP Basic Authentication
    • Pros: Simple setup, secure, supports service accounts, fine-grained access control
    • Cons: Requires managing key rotation
Tip: This connector uses Cloud API Keys for accessing Confluent Cloud's management APIs (creating clusters, topics, schemas, etc.). These are different from Resource API Keys, which are used for producing and consuming messages directly to and from Kafka clusters.

Understanding API Key Types

Confluent Cloud uses two categories of API keys:

  1. Cloud API Keys (used by this connector):

    • Grant access to management APIs (Environments, Clusters, Topics, ACLs, Schema Registry, Connectors, etc.)
    • Scoped to the entire organization or specific resources
    • Created through the Confluent Cloud Console or CLI
    • Used for administrative operations and infrastructure management
  2. Resource API Keys (not used by this connector):

    • Grant access to specific resources (Kafka clusters, Schema Registry clusters, ksqlDB applications)
    • Used by applications to produce/consume messages, register schemas, run queries
    • Created for specific Kafka clusters or services
    • Inherit permissions from ACLs assigned to the owning principal
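
As a sketch of what happens under the hood, Cloud API requests use HTTP Basic Authentication, with the Key ID as the username and the Key Secret as the password. A minimal Python example (the `/org/v2/environments` path is shown for illustration; verify paths against the current Confluent Cloud API reference):

```python
import base64
import urllib.request

def cloud_api_request(key_id: str, key_secret: str, path: str) -> urllib.request.Request:
    """Build an authenticated request for the Confluent Cloud management API.

    Cloud API Keys use HTTP Basic Authentication: the Key ID is the
    username and the Key Secret is the password.
    """
    token = base64.b64encode(f"{key_id}:{key_secret}".encode()).decode()
    req = urllib.request.Request(f"https://api.confluent.cloud{path}")
    req.add_header("Authorization", f"Basic {token}")
    return req

# Build (but do not send) a request to list environments
req = cloud_api_request("ABCD1234EFGH5678", "example-secret", "/org/v2/environments")
```

Sending the request (e.g., with `urllib.request.urlopen(req)`) requires valid credentials and is omitted here.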

Setting up Cloud API Keys

Follow these steps to create a Cloud API Key for the Confluent connector:

Step 1: Access API Keys Section

  1. Log in to Confluent Cloud Console

  2. Click on your user profile icon in the top-right corner

  3. Select Cloud API keys from the dropdown menu

Step 2: Create Cloud API Key

  1. Click Add key or Create key

  2. In the "Create API key" dialog:

    • Scope: Select the scope for your API key:
      • My account - Keys scoped to your user account (for personal use)
      • Service account - Keys scoped to a service account (recommended for production)
    • If using a service account, select an existing service account or create a new one
  3. Click Next

  4. Optionally, add a Description to identify the key's purpose (e.g., "Webrix MCP Integration")

  5. Click Create or Download and continue

Step 3: Save Your Credentials

IMPORTANT: The API Key Secret is only shown once and cannot be retrieved later.

  1. You'll see two values:

    • API Key (also called Key ID) - Example: ABCD1234EFGH5678
    • API Secret - Example: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  2. Save both values immediately in a secure location:

    • Copy the API Key (you'll use this as the Key ID in Webrix)
    • Copy the API Secret (you'll use this as the Key Secret in Webrix)
  3. Click I have saved my key to confirm

Danger: Store your API Secret securely! It provides full access to your Confluent Cloud resources within the key's scope. Do not share it or commit it to version control.

Step 4: Configure in Webrix

  1. In the Webrix connector settings, select API Key as the authentication method

  2. Enter your credentials:

    • Key ID: Paste the API Key from Step 3
    • Key Secret: Paste the API Secret from Step 3
  3. Click Save Changes or Connect

Your Confluent Cloud connector is now ready to use!

For production use cases, it's highly recommended to use a service account instead of a personal user account:

Why Use Service Accounts?

  • Continuity: Service accounts persist even when team members leave
  • Security: Better separation of concerns and audit trails
  • Access Control: Fine-grained permissions through role-based access control (RBAC)
  • Automation: Designed for machine-to-machine authentication

Creating a Service Account

  1. In Confluent Cloud Console, go to Accounts & access → Service accounts

  2. Click Add service account

  3. Enter a Name (e.g., "webrix-mcp-integration") and optional Description

  4. Click Create

  5. Assign appropriate roles to the service account:

    • OrganizationAdmin - Full access to all resources (use cautiously)
    • EnvironmentAdmin - Admin access to specific environments
    • CloudClusterAdmin - Manage Kafka clusters
    • Custom roles for specific permissions
  6. Create Cloud API Keys for this service account following the steps above
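
Service accounts can also be created programmatically through the management API. A sketch of the request body for the IAM v2 endpoint (`POST /iam/v2/service-accounts`); field names should be verified against the current API reference:

```python
import json

# Illustrative service-account creation payload. The name and description
# are placeholders matching the console example above.
payload = {
    "display_name": "webrix-mcp-integration",
    "description": "Service account for the Webrix MCP connector",
}
body = json.dumps(payload)
```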

Common Use Cases

Data Pipeline Management

  • Create and manage Kafka topics for real-time data streams
  • Configure retention policies and partition counts for optimal performance
  • Monitor consumer group lag to ensure timely data processing
  • Set up ACLs for secure multi-tenant access
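
As an illustration of topic management, a topic-creation payload for the Kafka REST v3 API (`POST /kafka/v3/clusters/{cluster_id}/topics`) might look like the following; the topic name, partition count, and settings are placeholders:

```python
import json

# Hypothetical topic specification: 6 partitions, 7-day retention,
# delete cleanup policy. Kafka config values are passed as strings.
topic_spec = {
    "topic_name": "orders.events",
    "partitions_count": 6,
    "configs": [
        {"name": "retention.ms", "value": str(7 * 24 * 60 * 60 * 1000)},  # 7 days
        {"name": "cleanup.policy", "value": "delete"},
    ],
}
body = json.dumps(topic_spec)
```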

Schema Governance

  • Register Avro, JSON, or Protobuf schemas to enforce data contracts
  • Validate schema compatibility to prevent breaking changes
  • Discover and audit schemas across your organization
  • Version schemas for controlled evolution
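
To illustrate schema registration, the Schema Registry API (`POST /subjects/{subject}/versions`) expects the schema itself serialized as a JSON string inside the request body. A sketch, using a hypothetical `Order` record:

```python
import json

# Example Avro schema; the record name and fields are placeholders.
avro_schema = {
    "type": "record",
    "name": "Order",
    "fields": [
        {"name": "order_id", "type": "string"},
        {"name": "amount", "type": "double"},
    ],
}

# The registry expects the schema as an escaped JSON string, not a nested object.
request_body = json.dumps({"schemaType": "AVRO", "schema": json.dumps(avro_schema)})
```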

Integration & Connectors

  • Deploy pre-built connectors for databases (PostgreSQL, MySQL, MongoDB)
  • Set up cloud storage integrations (S3, GCS, Azure Blob)
  • Connect to SaaS applications (Salesforce, Snowflake, Elasticsearch)
  • Monitor and manage connector status and performance
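
Connector creation takes a name plus a flat `config` map of string settings. Option names vary per connector, so the connector class and settings below are illustrative only; check the specific connector's documentation:

```python
import json

# Hypothetical payload for creating a managed sink connector.
# "connector.class" and the other keys depend on the connector type.
connector = {
    "name": "orders-s3-sink",
    "config": {
        "connector.class": "S3_SINK",  # placeholder connector class
        "topics": "orders.events",
        "tasks.max": "1",
    },
}
body = json.dumps(connector)
```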

Stream Processing

  • Create ksqlDB clusters for SQL-based stream processing
  • Deploy Apache Flink compute pools for advanced analytics
  • Build materialized views and real-time dashboards
  • Implement complex event processing and data enrichment

Access Management

  • Create service accounts for production applications
  • Generate API keys for different environments and use cases
  • Configure ACLs for fine-grained access control
  • Manage user permissions and role assignments

Cost & Resource Optimization

  • Monitor billing and usage across environments
  • Track costs by cluster, connector, and resource type
  • Optimize cluster sizing based on throughput requirements
  • Identify and clean up unused resources

API Capabilities

This connector provides access to the following Confluent Cloud APIs:

Organization & Environment Management

  • Create and manage environments for organizing resources
  • View organization details and settings

Kafka Cluster Operations

  • List, create, update, and manage Kafka clusters
  • Create and configure topics with custom settings
  • Manage partitions and view partition distribution
  • Produce messages to topics for testing
  • Monitor consumer groups and lag

Access Control

  • Create and manage ACLs for secure resource access
  • Configure permissions for users and service accounts
  • List and audit access control policies

Configuration Management

  • View and update topic configurations
  • Manage retention, cleanup policies, and compression
  • Configure broker-level settings

Schema Registry

  • Register and manage schemas (Avro, JSON, Protobuf)
  • List subjects and schema versions
  • Lookup schemas by ID or content
  • Enforce schema compatibility rules

Kafka Connect

  • List available connector plugins
  • Create and configure connectors
  • Monitor connector status and health
  • Validate connector configurations

Identity & Access Management

  • Create and manage service accounts
  • Generate and revoke API keys
  • List keys by owner or resource

Stream Processing

  • Create and manage ksqlDB clusters
  • Deploy Apache Flink compute pools
  • Configure stream processing resources

Networking

  • List and manage network configurations
  • View VPC/VNet settings and connectivity options

Billing & Cost Management

  • Retrieve cost data by resource and time period
  • Monitor usage and spending trends

Troubleshooting

Authentication Errors

Error: 401 Unauthorized or Invalid credentials

Cause: Incorrect API Key or Secret, or key has been deleted/revoked.

Solution:

  1. Verify you're using the correct Key ID and Secret
  2. Check that the API key hasn't been deleted in Confluent Cloud Console
  3. Ensure you're using a Cloud API Key, not a Resource API Key
  4. If using a service account, verify it still exists and has proper permissions

Permission Errors

Error: 403 Forbidden or Access denied

Cause: API key owner lacks necessary permissions for the requested operation.

Solution:

  1. Check the roles assigned to the user or service account that owns the API key
  2. Ensure the service account has appropriate roles (OrganizationAdmin, EnvironmentAdmin, etc.)
  3. For resource-specific operations, verify the principal has ACLs on the resource
  4. Review the Confluent Cloud RBAC documentation

Resource Not Found

Error: 404 Not Found or Resource does not exist

Cause: The specified resource ID doesn't exist or is in a different environment.

Solution:

  1. Verify the resource ID is correct (cluster IDs start with lkc-, environments with env-, etc.)
  2. Check that the resource is in the expected environment
  3. List resources to find the correct ID
  4. Note that deleted resources cannot be accessed, even with a previously valid ID

Rate Limiting

Error: 429 Too Many Requests

Cause: You've exceeded the API rate limits.

Solution:

  1. Implement exponential backoff and retry logic in your application
  2. Reduce the frequency of API calls
  3. Use pagination for list operations instead of fetching all resources at once
  4. Consider caching results for read-heavy operations
  5. Contact Confluent support if you need higher rate limits
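
The retry advice above can be sketched as a small helper with exponential backoff and jitter. Here rate limiting is simulated with a plain exception; a real client would check for an HTTP 429 status instead:

```python
import random
import time

def with_backoff(call, max_attempts=5, base_delay=0.5):
    """Retry `call` with exponential backoff plus jitter.

    `call` signals rate limiting by raising RuntimeError in this sketch.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except RuntimeError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error
            # Double the delay each attempt; jitter avoids synchronized retries
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```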

Cluster Creation Issues

Error: Cluster creation fails or times out

Cause: Resource limitations, region availability, or configuration issues.

Solution:

  1. Verify the selected cloud provider and region support your cluster type
  2. Check your organization's quotas and limits
  3. Ensure network CIDR blocks don't overlap with existing networks
  4. For Dedicated clusters, verify the environment has a network configured
  5. Review cluster creation logs in the Confluent Cloud Console

Schema Registry Compatibility Errors

Error: Schema being registered is incompatible with an earlier schema

Cause: The new schema violates compatibility rules.

Solution:

  1. Check the subject's compatibility level (BACKWARD, FORWARD, FULL, NONE)
  2. Review what fields were added, removed, or changed
  3. Use the "Test Compatibility" feature before registering
  4. Consider using NONE compatibility for development/testing
  5. Understand schema evolution rules
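
As an illustration of a backward-compatible change: adding a field with a default lets consumers on the new schema still read records written with the old one. A hypothetical Avro evolution:

```python
# Old schema: a single required field.
old_schema = {
    "type": "record", "name": "Order",
    "fields": [{"name": "order_id", "type": "string"}],
}

# New schema: the default value makes the added field backward compatible,
# since old records without "currency" can still be decoded.
new_schema = {
    "type": "record", "name": "Order",
    "fields": [
        {"name": "order_id", "type": "string"},
        {"name": "currency", "type": "string", "default": "USD"},
    ],
}
```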

Best Practices

  1. Use Service Accounts: Always use service accounts instead of personal accounts for production applications

  2. Rotate API Keys: Implement a regular key rotation policy (every 90 days recommended)

  3. Least Privilege: Grant the minimum permissions necessary for each service account

  4. Environment Separation: Use separate environments for development, staging, and production

  5. Monitor Costs: Regularly review billing data to optimize resource usage

  6. Schema Management: Always use Schema Registry for production data to ensure compatibility

  7. ACL Management: Implement fine-grained ACLs rather than using overly permissive service accounts

  8. Tag Resources: Use descriptive names and add metadata to resources for better organization

Support

For issues specific to Confluent Cloud:

  • See the Confluent documentation or contact Confluent support through the Confluent Cloud Console

For issues with the Webrix connector:

  • Contact Webrix support through your usual channels