# Incidents | Layer Financial Technologies, Inc.

Incidents reported on the status page for Layer Financial Technologies, Inc.: https://status.layerfi.com/

## Monitor events

* api.layerfi.com recovered (Sat, 31 Jan 2026 00:38:50 +0000)
* api.layerfi.com went down (Sat, 31 Jan 2026 00:35:53 +0000)
* sandbox.layerfi.com recovered (Sat, 31 Jan 2026 00:08:50 +0000)
* sandbox.layerfi.com went down (Sat, 31 Jan 2026 00:05:45 +0000)
* api.layerfi.com recovered (Wed, 14 Jan 2026 22:12:04 +0000)
* api.layerfi.com went down (Wed, 14 Jan 2026 22:08:49 +0000)
* api.layerfi.com recovered (Wed, 14 Jan 2026 21:44:47 +0000)
* api.layerfi.com went down (Wed, 14 Jan 2026 21:41:59 +0000)

## Client Dashboard Intermittent Degradation

Wed, 30 Jul 2025 05:20:00 -0000 | https://status.layerfi.com/incident/711842

### Summary

On July 29, Layer engineers discovered an ongoing issue that intermittently caused customer dashboards to hang indefinitely while loading data. The team determined that the issue had been occurring intermittently for 2.5 weeks, affecting a total of 695 end users. A full fix was identified and deployed 90 minutes after identification, and the affected customer platform was notified.

### Impact

* For periods of 30 seconds to 7 minutes, customer dashboards would remain in a loading state.
* The Profit and Loss report would not load.
* Other functionality, including bank transaction categorization, continued to work normally.
* A total of 695 users were impacted over 2.5 weeks.
* Only one Layer customer platform was affected.

### Timeline

* 2025-03: A low-impact exception indicating a full data processing queue was first thrown in Layer's system. At the time, it occurred in a non-critical system with no user impact.
* 2025-07-13: A legacy system migration completes, and additional load is migrated to a new processing queue. The queue begins to be intermittently overwhelmed, at first for just seconds at a time. The full-queue exception is thrown, but the Layer engineering team believes it shares the cause of the earlier low-impact exception, and no action is taken.
* 2025-07-29: While testing an unrelated change, Layer engineers notice that profit and loss queries are not loading for a customer. Immediate investigation reveals the issue is widespread for end users with specific internal settings.

### Resolution

* Error investigation traced the failure back to the performance-sensitive resource queue.
* Further investigation revealed that the queue was full because a resource was shared across multiple work streams.
* Queue resources were separated and normal performance resumed (see the sketch after this list).
* The client experience was manually verified to be working again, and logging was put in place to gain visibility into any recurrence.
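As a rough illustration of the "queue resources were separated" fix, here is a minimal, hypothetical TypeScript sketch: instead of one worker pool shared by report queries and background processing, each work stream gets its own bounded pool, so a backlog in one stream can no longer starve the other. The class, pool names, and concurrency limits are illustrative assumptions, not Layer's actual implementation.

```typescript
type Task = () => Promise<void>;

// A bounded in-process worker pool: at most `concurrency` tasks run at once;
// the rest wait in a FIFO queue.
class WorkerPool {
  private queue: Task[] = [];
  private active = 0;

  constructor(private readonly concurrency: number) {}

  submit(task: Task): void {
    this.queue.push(task);
    this.drain();
  }

  private drain(): void {
    while (this.active < this.concurrency && this.queue.length > 0) {
      const task = this.queue.shift()!;
      this.active++;
      task()
        .catch(() => { /* a real system would log the failure here */ })
        .finally(() => {
          this.active--;
          this.drain();
        });
    }
  }
}

// Before the fix: one shared pool served both workloads, so a processing
// backlog left dashboards stuck in a loading state. After: independent
// capacity per work stream.
const reportPool = new WorkerPool(8);     // profit-and-loss queries
const processingPool = new WorkerPool(4); // background data processing

reportPool.submit(async () => { /* run a report query */ });
processingPool.submit(async () => { /* process a data batch */ });
```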
### Next Steps

* **[Done]** Implement safeguards against sharing resources across performance-sensitive queues.
* Establish structured triage of ongoing exceptions, including those initially believed to be unimportant.
* Implement specific alerting for performance degradation of customer requests.

## API Outage

Tue, 15 Jul 2025 19:37:00 -0000 | https://status.layerfi.com/incident/620260

Layer experienced intermittent API outages from 11:43 AM to 12:05 PM PT today. During this time, both direct API calls and the embedded client experiences were unexpectedly unavailable. The root cause has been identified and resolved.

## Incident Report: January 14, 2025 Degraded Services

Tue, 14 Jan 2025 20:48:00 -0000 | https://status.layerfi.com/incident/496030

### Summary

On January 14, 2025, from 1:09 AM to 12:48 PM PT, Layer's API services were degraded. During this period, requests failed intermittently, with more than 50% of requests failing between 7:00 AM and 12:48 PM PT.

### Impact

1. End-user SMB dashboards intermittently failed to load. SMBs would see loading states and error states. Refreshing succeeded in many, but not all, cases.
2. Third-party data syncing was delayed, but no data loss occurred.
3. SMS delivery was significantly reduced, with only 15–20% of expected messages sent. All scheduled and queued messages were sent after recovery.
4. API operations to update businesses failed. Updates to businesses, including adding Plaid items and processor tokens, may need to be retried.

### Root Cause

At approximately 1:00 AM PT, a customer began bulk loading historical data into our /payouts endpoint, which did not have any rate limiting protection. This bulk loading caused resource contention on our server's database connections, as these operations weren't batched, rate limited, or optimized for bulk loading.

### Resolution

Layer engineers became aware of the issue early in the morning. Unfortunately, the issue caused multiple secondary problems that we mistakenly believed to be the root cause. The true root cause was identified at 12:30 PM PT, rate limits were added to the sensitive endpoint, and full functionality was restored at 12:48 PM PT.

### Follow-Up Actions

We are implementing the following safeguards over the next 24 hours:

1. **Rate Limiting**: Layer is standardizing rate limit safeguards across all endpoints. We are setting limits that will not affect existing usage patterns and that are intended to guard against exceptional scenarios like bulk loading large datasets. Documentation on all rate limits will be added to Layer's API documentation shortly (a sketch of the mechanism follows after this list).
2. **Profiling and Stress Testing**: We are beginning a round of stress testing to simulate high load on multiple request flows. We will dedicate time to improving performance-sensitive workflows.
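To illustrate the per-endpoint rate limiting described in follow-up action 1, below is a minimal token-bucket sketch in TypeScript. The bucket capacity, refill rate, and keying by (customer, endpoint) are assumptions for illustration only, not Layer's published limits or implementation.

```typescript
// A token bucket allows short bursts up to `capacity` while enforcing a
// steady-state rate of `refillPerSec` requests per second.
class TokenBucket {
  private tokens: number;
  private lastRefill = Date.now();

  constructor(
    private readonly capacity: number,
    private readonly refillPerSec: number
  ) {
    this.tokens = capacity;
  }

  tryRemove(): boolean {
    const now = Date.now();
    // Refill proportionally to elapsed time, capped at capacity.
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((now - this.lastRefill) / 1000) * this.refillPerSec
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// One bucket per (customer, endpoint) pair, so a bulk load against one
// endpoint (such as the unthrottled /payouts calls in this incident) cannot
// exhaust shared resources for everyone else.
const buckets = new Map<string, TokenBucket>();

function allowRequest(customerId: string, endpoint: string): boolean {
  const key = `${customerId}:${endpoint}`;
  let bucket = buckets.get(key);
  if (!bucket) {
    bucket = new TokenBucket(100, 10); // burst of 100, steady 10 req/s (illustrative)
    buckets.set(key, bucket);
  }
  return bucket.tryRemove(); // false -> respond 429 Too Many Requests
}
```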
## Incident Report: January 14, 2025 Degraded Services

Tue, 14 Jan 2025 15:27:00 -0000 | https://status.layerfi.com/incident/496030

Requests are intermittently failing.

## Sandbox infra migration

Mon, 27 Nov 2023 00:23:00 -0000 | https://status.layerfi.com/incident/293456

We are migrating sandbox infrastructure. No downtime is expected except at the point of cutover, when in-progress requests will fail. All requests should succeed upon retry (see the retry sketch at the end of this page).

## Delayed transaction processing

Tue, 21 Nov 2023 21:04:00 -0000 | https://status.layerfi.com/incident/293452

Due to high load, processing of new transactions is currently delayed. Transaction categorization, matching, and async report generation are experiencing delays of up to 15 minutes.
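Both the sandbox cutover note above and the January 14 impact list advise retrying failed requests. Here is a hedged client-side sketch of that guidance in TypeScript, using exponential backoff with jitter; the endpoint path and retry parameters are illustrative assumptions, not taken from Layer's documentation.

```typescript
// Retry a request on transient failures (network errors, 5xx, 429) with
// exponential backoff plus jitter. Returns the last response or rethrows the
// last network error once attempts are exhausted.
async function fetchWithRetry(
  url: string,
  init: RequestInit = {},
  maxAttempts = 4
): Promise<Response> {
  for (let attempt = 1; ; attempt++) {
    try {
      const res = await fetch(url, init);
      // Non-retryable outcome: success or a client error other than 429.
      if (res.status < 500 && res.status !== 429) return res;
      if (attempt === maxAttempts) return res;
    } catch (err) {
      if (attempt === maxAttempts) throw err; // network error, out of retries
    }
    // Backoff with jitter: roughly 0.5s, 1s, 2s, ... between attempts.
    const delayMs = 500 * 2 ** (attempt - 1) * (0.5 + Math.random());
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
}

// Illustrative usage, e.g. retrying a sandbox request that failed at the
// migration cutover (hypothetical path, not a documented Layer endpoint):
// await fetchWithRetry("https://sandbox.layerfi.com/v1/businesses", {
//   headers: { Authorization: "Bearer ..." },
// });
```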