Merge pull request #1562 from Logflare/staging
Release v1.3.6
filipecabaco authored Jun 9, 2023
2 parents 6d3b33a + 77b5a99 commit da22ba9
Showing 66 changed files with 630 additions and 862 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -27,6 +27,6 @@ Sign up at https://logflare.app.
## Learn more

- [Official website](https://logflare.app)
- [Guides](https://logflare.app/guides) and [documentation](https://docs.logflare.app)
- [Documentation](https://docs.logflare.app)
- <[email protected]> or <[email protected]>
- [Developer Docs](./DEVELOPMENT.md)
2 changes: 1 addition & 1 deletion VERSION
@@ -1 +1 @@
1.3.5
1.3.6
9 changes: 9 additions & 0 deletions docs/docs.logflare.com/docs/alerts/_category_.json
@@ -0,0 +1,9 @@
{
"label": "Alerts",
"collapsible": true,
"collapsed": true,
"position": 6,
"link": {
"type": "generated-index"
}
}
21 changes: 21 additions & 0 deletions docs/docs.logflare.com/docs/alerts/slack/index.md
@@ -0,0 +1,21 @@
---
title: Slack
---

# Set Up the Logflare Slack App

Logflare's Slack integration allows you to get log event alerts in your Slack channels.

## Add to Slack

Slack channels are tied to Logflare sources. When you edit a Logflare source, you can see options for alerts. Simply click the Add to Slack button and follow the prompts.

![Edit source](./edit-source.png)

![Add to Slack](./add-to-slack.png)

## Get Alerts

Once your Logflare source is connected to a Slack channel, you will get alerts about events in that source as they come in.

![Slack notifications example](./slack-notifications-example.png)
91 changes: 88 additions & 3 deletions docs/docs.logflare.com/docs/backends/bigquery/index.md
@@ -1,3 +1,7 @@
---
toc_max_heading_level: 3
---

# BigQuery

Logflare natively supports the storage of log events to BigQuery. Ingested logs are **streamed** into BigQuery, and each source is mapped to a BigQuery table.
@@ -24,6 +28,25 @@ For metered plan users, if the TTL is not set, the BigQuery table will default t

For users on the Free plan, the maximum retention is **3 days**.

#### Deep Dive: Table Partitioning

Table partitioning effectively splits a BigQuery table into many smaller tables.
When using partitioned tables, BigQuery storage is effectively half-priced for older data: once a partition has not been modified for 90 days, BigQuery charges only half the normal storage rate for it.

Because Logflare automatically partitions tables over time, the older and less-queried partitions are separated out and qualify for this discount, reducing total effective storage costs.

Furthermore, by partitioning a table, queries can be limited to scan only selected partitions, saving you more money and making your queries even more responsive, as in the sketch below.
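
A minimal sketch of such a partition-limited query, assuming an illustrative project, dataset, and table name (none of these are Logflare defaults):

```sql
-- Filtering on the _PARTITIONTIME pseudocolumn prunes the scan to
-- only the partitions from the last 7 days; all names are placeholders.
SELECT timestamp, event_message
FROM `my_project.my_dataset.partitioned_table_name`
WHERE _PARTITIONTIME >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
```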

However, the caveat is that BigQuery's streaming buffer is not included in partitioned queries by default. This affects queries across the partitioned tables and results in a lag before newly streamed data becomes visible in the partitioned tables.

Should you need to query against the streaming buffer directly, you can use the following query ([source](https://stackoverflow.com/questions/41864257/how-to-query-for-data-in-streaming-buffer-only-in-bigquery)) to do so:

```sql
SELECT fields FROM `dataset.partitioned_table_name` WHERE _PARTITIONTIME IS NULL
```

You can read more about partitioned tables in the official Google Cloud [documentation](https://cloud.google.com/bigquery/docs/partitioned-tables).

## Logflare-Managed BigQuery

Logflare free and metered users will not need to manage BigQuery settings and permissions, and will have access to their data via their registered e-mail address.
Expand All @@ -38,13 +61,15 @@ The differences in BigQuery behavior for the two plans are as follows:

## Bring Your Own Backend (BYOB)

You can also Bring Your Own Backend by allowing Logflare to manage a GCP project's BigQuery.
You can also Bring Your Own Backend by allowing Logflare to manage your GCP project's BigQuery.

This allows you to retain data beyond the metered plan's 90 days, as well as integrate the BigQuery tables managed by Logflare into your BigQuery-based data warehouse.

Furthermore, you will have complete control over storage and querying costs as you will be billed directly by Google Cloud, while Logflare will only handle the ingestion pipeline.

### Setting Up Your Own BigQuery Backend

:::warn Enable Billing for Project
:::warning Enable Billing for Project
Enable a Google Cloud Platform billing account with payment information or [we won't be able to insert into your BigQuery table!](#ingestion-bigquery-streaming-insert-error)
:::

@@ -96,7 +121,7 @@ The steps for setting up self-hosted Logflare require different BigQuery config

You can directly execute SQL queries in BigQuery instead of going through the Logflare UI. This is helpful for generating reports that require aggregations, or for performing queries across multiple BigQuery tables.

When referencing Logflare-managed BigQuery tables, you will need to reference the table by the source's UUID. If you are crafting the query within [Logflare Endpoints](/endpoints), the table name resolution is handled automatically for you.
When referencing Logflare-managed BigQuery tables, you will need to reference the table by the source's UUID. If you are crafting the query within [Logflare Endpoints](/concepts/endpoints), the table name resolution is handled automatically for you.
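
As a hedged example, a direct query against a Logflare-managed table might look like the sketch below. The project and dataset names are placeholders, and the exact table-naming convention (commonly the source UUID with dashes replaced by underscores) should be verified in your BigQuery console:

```sql
-- Sketch only: replace the backticked path with your project, dataset,
-- and the table named after your source's UUID.
SELECT timestamp, event_message
FROM `my_project.my_dataset.9dd9a6f6_8e9b_4fa4_b682_4f2f5cd99da3`
ORDER BY timestamp DESC
LIMIT 100
```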

### Unnesting Repeated Records

@@ -127,3 +152,63 @@ Access Denied: BigQuery BigQuery: Streaming insert is not allowed in the free ti
```

To resolve this error, you will need to enable billing for your project through the GCP console.

## Data Studio (Looker) Integration

When you log into the Logflare service with your Google account, we will automatically provide you access to the underlying BigQuery tables associated with all your sources. This allows you to create visualizations and reports in Data Studio.

Looker Studio has extensive documentation and tutorials to help you [learn more](https://support.google.com/looker-studio/#topic=6267740).

### Setup Steps

#### Step 1: Open Data Studio and Add a Data Source

![Open Data Studio](./open-data-studio.png)

#### Step 2: Select the BigQuery Connector

![Select the BigQuery Connector](./bigquery-connector.png)

#### Step 3: Find your Logflare Source

The BigQuery table name will match the source's ID as shown on the dashboard.

![Find your Logflare Source](./find-logflare-source.png)

#### Step 4: Select Partitioned Table Time Dimension

Selecting this option will make your reports faster and let you use the date range picker in a report effectively.
It will also help control the ongoing BigQuery costs associated with queries.

Immediately after report creation, it may take up to 15 minutes for ingested data to flow into Data Studio.

![Select Partitioned Table Time Dimension](./partitioned-table-time-dimension.png)

#### Step 5: Connect to the Data

![Connect to the data](./connect-data-studio.png)

#### Step 6: Set Timestamp Type to Date Hour (Optional)

This allows you to see more fine-grained fluctuations in your data.

You can also mix hourly and daily data in your reports by using `duplicate` on a field and configuring it as either `Date Hour` or `Date`.

![Set the Timestamp type](./data-studio-timestamp-type.png)

#### Step 7: Create the Report and Configure Settings

Click on "Create Report" to finalize your initial report.
![Create the Data Studio Report](./data-studio-create-report.png)

You can also configure the report further by navigating to Report Settings.

![Configure Report Settings](./data-studio-report-settings.png)

You should also configure the default data source date range to optimize your query cost.

![Configure the data source date range](./data-studio-configure-date-range.png)

#### Step 8: Create Your First Charts

![Create your first chart](./data-studio-create-chart.png)
9 changes: 9 additions & 0 deletions docs/docs.logflare.com/docs/case-studies/_category_.json
@@ -0,0 +1,9 @@
{
"label": "Case Studies",
"collapsible": false,
"collapsed": false,
"position": 7,
"link": {
"type": "generated-index"
}
}
15 changes: 15 additions & 0 deletions docs/docs.logflare.com/docs/case-studies/supabase/index.md
@@ -0,0 +1,15 @@
---
title: Supabase
---

# How Supabase Uses Logflare

![Supabase acquires Logflare](./supabase-acquires-logflare.png)

Logflare was [acquired by Supabase](https://supabase.com/blog/supabase-acquires-logflare?utm_source=logflare-site&utm_medium=referral&utm_campaign=logflare-acquired) back in 2021, and has since ingested billions of log events for Supabase project stacks.

Logflare is an integrated part of the Supabase stack, powering logging for each part of the stack. Logflare's ingestion capabilities ensure that log events flow quickly into BigQuery. Since the acquisition, the Logflare team has been building out the [self-hosting capabilities](/self-hosting) of Logflare, so that Logflare can eventually be integrated into the self-hosted Supabase stack to power the charting and logging experience.

[Logflare Endpoints](/concepts/endpoints) allows querying of this logging data. Specifically, [query sandboxing](/concepts/endpoints#query-sandboxing) lets Supabase users formulate their own SQL queries across their logging data, enabling complex and custom queries beyond what Supabase provides out of the box. These queries are executed in the [Supabase Logs Explorer](https://supabase.com/docs/guides/platform/logs#logs-explorer), part of the logging experience in Supabase Studio. Furthermore, the fully interactive log filtering interface within the Supabase Logs UI dynamically builds SQL and lets users eject into editing the raw SQL to tweak and customize it for their specific debugging requirements.

[Read the full write-up of how Logflare serves as the Supabase Logs server](https://supabase.com/blog/supabase-logs-self-hosted)
10 changes: 10 additions & 0 deletions docs/docs.logflare.com/docs/concepts/_category_.json
@@ -0,0 +1,10 @@
{
"label": "Introduction",
"collapsible": false,
"collapsed": false,
"position": 1,
"link": {
"type": "doc",
"id": "concepts/index"
}
}
@@ -10,6 +10,16 @@ Logflare Endpoints creates GET HTTP API endpoints that execute a SQL query and
This feature is in Alpha stage and is only available for private preview.
:::

## API Endpoints

There are two ways to query a Logflare Endpoint: via the endpoint's UUID or via the endpoint's name:

```
GET https://api.logflare.app/api/endpoints/query/9dd9a6f6-8e9b-4fa4-b682-4f2f5cd99da3
GET https://api.logflare.app/api/endpoints/query/name/my.custom.endpoint
```

## Crafting a Query

Queries are passed to the underlying backend, which performs the querying.
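
As a minimal sketch (the source name and parameter below are illustrative, not taken from these docs), an Endpoint query can declare `@`-prefixed parameters that are populated from the GET request's query string:

```sql
-- Hypothetical endpoint query: `my_source` and @search are placeholders.
-- @search would be supplied as ?search=... on the GET request.
SELECT timestamp, event_message
FROM `my_source`
WHERE event_message LIKE @search
ORDER BY timestamp DESC
```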
@@ -89,4 +99,4 @@ When configured, the cache will be automatically updated at the set interval, pe

## Security

Endpoints are insecure by default. However, you can generate access tokens to secure the API endpoints.
Endpoints are insecure by default. However, you can generate [access tokens](/concepts/access-tokens) to secure the API endpoints.
110 changes: 0 additions & 110 deletions docs/docs.logflare.com/docs/concepts/index.md

This file was deleted.

@@ -16,22 +16,36 @@ Logflare is a log ingestion and querying engine that allows the storage and quer
light: "/img/dashboard.png",
dark: "/img/dashboard-dark.png",
}}
style={{maxHeight: 800, maxWidth: "100%"}}
style={{ maxHeight: 800, maxWidth: "100%" }}
/>

Logflare has recently been [acquired by Supabase](https://supabase.com/blog/supabase-acquires-logflare); however, the service still operates and powers the [Supabase Platform's](https://supabase.com/) logging and observability features.

## Features
## Features and Motivations

### Scalable Storage and Querying Costs

Columnar databases allow for fast analysis while providing compact storage, letting you store orders of magnitude more event data at the same cost our competitors offer. Furthermore, costs scale predictably with the amount of data stored, giving you peace of mind when managing billing and infrastructure costs.

Lucene-based log management systems worked well before the advent of scalable database options, but they become prohibitively expensive beyond a certain scale and volume, and the data must be shipped elsewhere to be analyzed over the long term.

BigQuery in particular was built in-house by Google to store and analyze petabytes of event data, and Logflare leverages it to let you store orders of magnitude more data.

### Bring Your Own Backend

Logflare can integrate with your own backends, with Logflare managing only the pipeline and throughput of log events. This ensures maximum flexibility for storing sensitive data.

Bringing your own backend gives you complete control over your storage and querying costs.

### Schema Management

When events are ingested, the backend schema is automatically managed by Logflare, allowing you to insert JSON payloads without having to worry about data type changes.

When new fields are sent to Logflare, the data type is detected automatically and merged into the current table schema.
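
As a hypothetical illustration, suppose a payload introduces a new `user_id` field under `metadata`. Once detected and merged into the schema, the field becomes queryable in the backend; assuming the common Logflare layout where `metadata` is a repeated record, and with placeholder names throughout, a BigQuery query might look like:

```sql
-- Hypothetical: after ingesting {"metadata": {"user_id": 123}}, the new
-- user_id field is merged into the schema and can be queried via UNNEST.
SELECT t.timestamp, m.user_id
FROM `my_project.my_dataset.my_source_table` t, UNNEST(t.metadata) AS m
WHERE m.user_id IS NOT NULL
```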

### Querying UI and API
Logflare provides a fully featured [querying interface](./querying). [Logflare Endpoints](./endpoints.md) provides a programmatic interface for executing SQL queries on your stored log events, allowing you to analyze and leverage your event data in downstream applications.

Logflare provides a fully featured [querying interface](./concepts/ingestion#querying). [Logflare Endpoints](/concepts/endpoints.md) provides a programmatic interface for executing SQL queries on your stored log events, allowing you to analyze and leverage your event data in downstream applications.

---

@@ -64,4 +78,4 @@

4. **Check the Source**

Congratulations, your first log event should be successfully POST-ed to Logflare! You can then search and filter the source for specific events using the [Logflare Query Language](./querying#logflare-query-language-lql).
Congratulations, your first log event should be successfully POST-ed to Logflare! You can then search and filter the source for specific events using the [Logflare Query Language](/concepts/lql).