<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Stradia Partners]]></title><description><![CDATA[Our mission is to help organizations pair pragmatic product strategy, low/no code technology, and data science to power their business processes.]]></description><link>https://blog.stradiapartners.com/</link><image><url>https://blog.stradiapartners.com/favicon.png</url><title>Stradia Partners</title><link>https://blog.stradiapartners.com/</link></image><generator>Ghost 5.87</generator><lastBuildDate>Tue, 07 Apr 2026 14:27:37 GMT</lastBuildDate><atom:link href="https://blog.stradiapartners.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[How Retool Powers Our Client’s Workflows From Concept To Implementation]]></title><description><![CDATA[<p>Building software fast without sacrificing quality is difficult. 
Organizations that wish to develop and adapt technology face the &#x201C;iron triangle&#x201D; of balancing the trade-offs between building something good, fast, or inexpensive.</p><p>As consultants, we turn to Retool for our clients because of its powerful combination of speed and</p>]]></description><link>https://blog.stradiapartners.com/how-retool-powers-our-clients-workflows-from-concept-to-implementation/</link><guid isPermaLink="false">67b637d155397a000187e144</guid><dc:creator><![CDATA[Aaron Berdanier]]></dc:creator><pubDate>Wed, 19 Feb 2025 20:13:02 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1531762811413-ae581ee90344?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDIxfHxkb25rZXklMjBtb3VudGFpbmVlcnxlbnwwfHx8fDE3Mzk5OTU4NjB8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1531762811413-ae581ee90344?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDIxfHxkb25rZXklMjBtb3VudGFpbmVlcnxlbnwwfHx8fDE3Mzk5OTU4NjB8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="How Retool Powers Our Client&#x2019;s Workflows From Concept To Implementation"><p>Building software fast without sacrificing quality is difficult. Organizations that wish to develop and adapt technology face the &#x201C;iron triangle&#x201D; of balancing the trade-offs between building something good, fast, or inexpensive.</p><p>As consultants, we turn to Retool for our clients because of its powerful combination of speed and flexibility. We can use Retool&#x2019;s baseline components and customize how we process and display data, which gives us the ability to &#x201C;break&#x201D; the iron triangle for our clients.&#xA0;</p><p>In other words, with Retool, we can move fast and build quality software with limited resources. 
What Retool offers us is the flexibility to fit into each organization&#x2019;s unique workflow.</p><h2 id="prototyped-saas-applications"><strong>Prototyped SaaS Applications</strong></h2><p>As one example, we recently helped a client build and launch a small SaaS app within a month. The client had an idea in the legal-tech domain and a potential customer for a demo with a tight deadline. They needed their application to function as intended and to show the potential of the idea.</p><p>We were brought in to build fast and stand up a working prototype. Retool accelerated our development by giving us the toolbox to pull together a fully-functional system. We built it from the ground up in only a few weeks, helping them hit their deadline and demo for their customer without any hand-waving. It was a huge win for all parties, and we are currently expanding to build the full consumer-facing application entirely within Retool, from interface to database.</p><p>This example has shown us that Retool can be used to prototype and power revenue-generating applications.</p><h2 id="supercharged-internal-operations"><strong>Supercharged Internal Operations</strong></h2><p>We&#x2019;ve also used Retool for internal-facing applications. Another example is from our client flyExclusive, a private aviation company. Before using Retool, they were manually handling multiple internal workflows, from pilot flight reporting to maintenance records.</p><p>For workflows that require data transparency and customized experiences, like verifying flight logs, we&#x2019;ve used Retool to create highly-specific data management interfaces. These interfaces give the right users access to the data they need in real time. And managers can edit certain fields, which updates an external database as the source of truth.</p><p>With Retool, an added bonus is that we can update the workflow and interface very quickly. 
In a fast-paced business environment where teams are continuously iterating and improving their operating procedures, this means that we can not only build quickly but also modify the application quickly.</p><p>For example, when this client changed their procedure for managing logs, we were able to update their workflow in a matter of days instead of spending much longer developing and deploying the update. The result is that the business can evolve and the technology can keep up.</p><h2 id="what-we%E2%80%99ve-learned"><strong>What We&#x2019;ve Learned</strong></h2><p>There is certainly still a need for traditional software development when it comes to scaling an application to support a massive user base and to process large amounts of data quickly. We haven&#x2019;t hit the ceiling on either of these factors with Retool.</p><p>Instead, we&#x2019;ve found that we are able to build 90% of the necessary features with out-of-the-box Retool components and workflows. Additionally, our ability to quickly and flexibly fit into each organization&#x2019;s workflow helps us delight our clients who are used to working with expensive software teams and delayed launches.</p><p>Dedicating resources to new and exploratory projects is rarely possible without a proof-of-concept. What Retool allows us to do is to help our clients achieve their goals on tight timelines and with limited budgets. This is a differentiator for our clients who need to demonstrate a return on investment before they can request additional support.</p><h2 id="what%E2%80%99s-next"><strong>What&#x2019;s Next?</strong></h2><p>While we use Retool to get our clients to outcomes fast, we don&#x2019;t see it simply as a proof-of-concept system. 
Retool has become a key part of our toolbox because we believe that Retool can power fully-functional user applications without sacrificing quality.</p><p>We&#x2019;ve found Retool to be a great data platform for internal company metrics and automation workflows. The out-of-the-box database and flexible components are the ideal foundation for us to bring disparate bits of data together and then let the end users interact with them. And after implementation, these applications are easy to maintain, update, and expand.</p><p>We are excited to be Retool Agency Partners so that we can continue to help our clients organize their data and put it to work.</p>]]></content:encoded></item><item><title><![CDATA[Getting Structured Data Out of Text with LLMs]]></title><description><![CDATA[<p>We&#x2019;ve done multiple projects to automatically pull structured data from text, including:</p><ul><li><a href="https://blog.stradiapartners.com/how-we-read-25k-emails-a-month-to-drive-revenue-with-ai/" rel="noreferrer">Automatically parsing emails</a></li><li>Scanning research articles</li><li>Processing customer feedback reports</li></ul><p>This is called <em>natural language processing</em>, where we use computers to process information contained in unstructured text. 
This is increasingly done with large language models (LLMs)</p>]]></description><link>https://blog.stradiapartners.com/getting-structured-data-out-of-text-with-llms-2/</link><guid isPermaLink="false">669fe0497ba7400001c0ce9a</guid><dc:creator><![CDATA[Aaron Berdanier]]></dc:creator><pubDate>Wed, 24 Jul 2024 13:00:29 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1613981948475-6e2407d8b589?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDE5fHxydXNzaWFuJTIwZG9sbHxlbnwwfHx8fDE3MjE3NTUxMTR8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1613981948475-6e2407d8b589?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDE5fHxydXNzaWFuJTIwZG9sbHxlbnwwfHx8fDE3MjE3NTUxMTR8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Getting Structured Data Out of Text with LLMs"><p>We&#x2019;ve done multiple projects to automatically pull structured data from text, including:</p><ul><li><a href="https://blog.stradiapartners.com/how-we-read-25k-emails-a-month-to-drive-revenue-with-ai/" rel="noreferrer">Automatically parsing emails</a></li><li>Scanning research articles</li><li>Processing customer feedback reports</li></ul><p>This is called <em>natural language processing</em>, where we use computers to process information contained in unstructured text. This is increasingly done with large language models (LLMs) because they provide amazing flexibility for processing natural language with intuitive inputs.</p><p>LLMs are next-token generators. Acting kind of like a <a href="https://www.newyorker.com/magazine/2023/11/20/geoffrey-hinton-profile-ai?ref=blog.stradiapartners.com" rel="noreferrer">glorified autocomplete</a>, they generate text that is most likely to come next based on what was given as an input. To generate structured text, we need to guide the model for what output we want. 
</p><p><strong>Here I&apos;m going to show you how we do that with some basic prompts.</strong></p><h2 id="structured-formats">Structured Formats</h2><p>There are a few <a href="https://www.makeuseof.com/xml-json-yaml-how-they-differ/?ref=blog.stradiapartners.com" rel="noreferrer">common formats</a> for structuring data: JSON, YAML, and XML. These are all designed so that they can be used by a computer program&#x2014;whether that is sticking data into a spreadsheet or running a database operation or code function.</p><p>Luckily, most foundation LLMs have been trained on many examples of structured data, so they have an understanding of these data formats out of the box. This means we can usually get pretty good results with zero- or few-shot learning (i.e., no need for a custom-trained model).</p><p>Let&#x2019;s take a look at an example and see how this works. For all of these examples, I&#x2019;m using <em>Mistral 7B Instruct </em>(which we could hook up for you through Amazon Bedrock!), but it works similarly in other models as well. Here is the input for all of these examples, where we just substituted <em>{format}</em> with JSON, YAML, or XML.</p><pre><code>Extract the name, email address, and sentiment (positive, neutral, or negative) from this EMAIL. 

Return the OUTPUT as formatted {format} with keys for sentiment and contact. 
Contact should have subkeys for name and email. 
Wrap it all in a result key.

EMAIL:
From: aaron@stradiapartners.com
Subject: Help
Body:
Hi,
I&apos;m really happy with my product, but I need some help.
can you write me back?
Aaron

OUTPUT:
</code></pre>
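<p>To make that substitution concrete, here is a minimal Python sketch of how the <em>{format}</em> placeholder can be filled in before sending the prompt to a model. The template text comes from the example above; the function name is ours, for illustration only:</p>

```python
# Prompt template from the post; {format} and {email} are filled in per request.
PROMPT_TEMPLATE = """Extract the name, email address, and sentiment (positive, neutral, or negative) from this EMAIL.

Return the OUTPUT as formatted {format} with keys for sentiment and contact.
Contact should have subkeys for name and email.
Wrap it all in a result key.

EMAIL:
{email}

OUTPUT:
"""


def build_prompt(fmt: str, email: str) -> str:
    """Build the extraction prompt; fmt is one of "JSON", "YAML", or "XML"."""
    return PROMPT_TEMPLATE.format(format=fmt, email=email)
```

<p>Calling <code>build_prompt("YAML", email_text)</code> produces the input shown above with YAML as the target format.</p>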
<h3 id="json-%E2%80%93-nests-key-value-pairs-in-brackets">JSON &#x2013; nests key-value pairs in brackets</h3><p><strong>Pros:</strong></p><ul><li>Handles numbers and text differently, simplifying post-processing</li><li>Strong support in the newer OpenAI models, with specific JSON training and built-in output validation</li></ul><p><strong>Cons:</strong></p><ul><li>Brackets and quotations add extra tokens, increasing cost and decreasing efficiency</li></ul><p><strong>Output (57 tokens):</strong></p><pre><code>{
  &quot;result&quot;: {
    &quot;contact&quot;: {
      &quot;name&quot;: &quot;Aaron&quot;,
      &quot;email&quot;: &quot;aaron@stradiapartners.com&quot;
    },
    &quot;sentiment&quot;: &quot;positive&quot;
  }
}
</code></pre>
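<p>Because the output is valid JSON, post-processing is nearly a one-liner in most languages. Here is a hedged Python sketch (the function name is ours) that parses the response and signals when a rerun is needed, e.g. if the model stopped before emitting the closing bracket:</p>

```python
import json


def parse_model_json(raw: str):
    """Parse a model response; return the payload, or None if a rerun is needed."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None  # incomplete or malformed output: caller can rerun the model
    return data.get("result")


response = """
{
  "result": {
    "contact": {"name": "Aaron", "email": "aaron@stradiapartners.com"},
    "sentiment": "positive"
  }
}
"""
parsed = parse_model_json(response)
# parsed["contact"]["name"] == "Aaron"; a truncated response returns None
```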
<h3 id="yaml-%E2%80%93-uses-indents-to-identify-hierarchical-structure">YAML &#x2013; uses indents to identify hierarchical structure</h3><p><strong>Pros:</strong></p><ul><li>Easily human readable</li><li>Without extra &#x201C;fluff,&#x201D; it is more efficient and cheaper than other formats</li></ul><p><strong>Cons:</strong></p><ul><li>Easy to mess up the indents, potentially increasing errors (although I haven&#x2019;t quantified that)</li></ul><p><strong>Output (30 tokens):</strong></p><pre><code>result:
  contact:
    email: aaron@stradiapartners.com
    name: Aaron
  sentiment: positive
</code></pre>
<h3 id="xml-%E2%80%93-wraps-data-in-tags">XML &#x2013; wraps data in tags</h3><p><strong>Pros:</strong></p><ul><li>Explicit wrappers ensure that &#x201C;extra&#x201D; text a model might return is ignored (sometimes models, especially &#x201C;chatty&#x201D; ones, like to add context before or after the structured data, such as &#x201C;Here is your data: &#x2026; Do you need anything else?&#x201D;)</li></ul><p><strong>Cons:</strong></p><ul><li>Opening and closing tags make it extremely verbose, increasing cost and decreasing efficiency</li></ul><p><strong>Output (57 tokens):</strong></p><pre><code>&lt;result&gt;
  &lt;contact&gt;
    &lt;name&gt;Aaron&lt;/name&gt;
    &lt;email&gt;aaron@stradiapartners.com&lt;/email&gt;
  &lt;/contact&gt;
  &lt;sentiment&gt;positive&lt;/sentiment&gt;
&lt;/result&gt;
</code></pre>
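<p>That wrapper advantage is easy to exploit in code. A minimal Python sketch (our own helper, not a library call) that pulls the <code>&lt;result&gt;</code> element out of a chatty response and ignores everything around it:</p>

```python
import re
import xml.etree.ElementTree as ET


def extract_result(raw: str):
    """Isolate the <result> element, discarding any chatty text around it."""
    match = re.search(r"<result>.*?</result>", raw, re.DOTALL)
    if match is None:
        return None
    return ET.fromstring(match.group(0))


chatty = """Here is your data:
<result>
  <contact>
    <name>Aaron</name>
    <email>aaron@stradiapartners.com</email>
  </contact>
  <sentiment>positive</sentiment>
</result>
Do you need anything else?"""

root = extract_result(chatty)
# root.findtext("sentiment") == "positive"
```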
<h2 id="challenges">Challenges</h2><p>As <a href="https://en.wikipedia.org/wiki/Stochastic_parrot?ref=blog.stradiapartners.com" rel="noreferrer">stochastic parrots</a>, LLMs are not without issues. </p><p>For structured data, the first challenge is what happens if the model returns an incomplete format? In the JSON example, if the model finishes before adding the closing <em>}</em> bracket, then the computer won&apos;t be able to properly parse it. This can sometimes be fixed by doing code checks after running it and then rerunning if there is an error. Some models also add explicit generation checks that require valid formatted output.</p><p>The second challenge is hallucination of data. Sometimes hallucination can be fixed by providing more specifications in the input (in the sentiment example, if we don&apos;t give a specific list of the options then the model might get creative with the output) or by lowering the randomness temperature.</p><p>Other times, unclear inputs might confuse the output (e.g., if there are multiple names in the email, it could inadvertently grab the wrong one), or just simply interpolate results that aren&apos;t there (maybe based on content that the model had seen in its training set).</p><p><strong>There are a few tricks for avoiding these shortcomings.</strong></p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://images.unsplash.com/photo-1474164281465-ff16877f9f6a?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDQwfHxHdWFyZHJhaWx8ZW58MHx8fHwxNzIxNzU2MDk4fDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" class="kg-image" alt="Getting Structured Data Out of Text with LLMs" loading="lazy" width="5616" height="3744" srcset="https://images.unsplash.com/photo-1474164281465-ff16877f9f6a?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDQwfHxHdWFyZHJhaWx8ZW58MHx8fHwxNzIxNzU2MDk4fDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=600 600w, 
https://images.unsplash.com/photo-1474164281465-ff16877f9f6a?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDQwfHxHdWFyZHJhaWx8ZW58MHx8fHwxNzIxNzU2MDk4fDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1000 1000w, https://images.unsplash.com/photo-1474164281465-ff16877f9f6a?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDQwfHxHdWFyZHJhaWx8ZW58MHx8fHwxNzIxNzU2MDk4fDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1600 1600w, https://images.unsplash.com/photo-1474164281465-ff16877f9f6a?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDQwfHxHdWFyZHJhaWx8ZW58MHx8fHwxNzIxNzU2MDk4fDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2400 2400w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Photo by </span><a href="https://unsplash.com/@naomi_august?ref=blog.stradiapartners.com"><span style="white-space: pre-wrap;">Naomi August</span></a><span style="white-space: pre-wrap;"> / </span><a href="https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit"><span style="white-space: pre-wrap;">Unsplash</span></a></figcaption></figure><h2 id="adding-extra-guardrails">Adding Extra Guardrails</h2><h3 id="specifying-the-schema">Specifying the Schema</h3><p>This can help ensure that the output is formatted correctly. And OpenAPI specifications provide a clear way to do this. In fact, OpenAI models have the option to input a specific JSON-formatted schema in the OpenAPI format.</p><p>From the <a href="https://swagger.io/docs/specification/data-models/representing-xml/?ref=blog.stradiapartners.com" rel="noreferrer">OpenAPI documentation</a>, here is an example showing how a model specification (in this case in YAML format! The JSON format is also common) can represent data in JSON or XML format:</p><p><strong>Model specification</strong></p><pre><code>components:
  schemas:
    book:
      type: object
      properties:
        id:
          type: integer
        title:
          type: string
        author:
          type: string
</code></pre>
<p><strong>JSON data</strong></p><pre><code>{
  &quot;id&quot;: 0,
  &quot;title&quot;: &quot;string&quot;,
  &quot;author&quot;: &quot;string&quot;
}
</code></pre>
<p><strong>XML data</strong></p><pre><code>&lt;book&gt;
  &lt;id&gt;0&lt;/id&gt;
  &lt;title&gt;string&lt;/title&gt;
  &lt;author&gt;string&lt;/author&gt;
&lt;/book&gt;
</code></pre>
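<p>The schema also gives us something to validate the parsed output against. Here is a deliberately minimal, hand-rolled Python check for the book example above (in production, a dedicated validator driven by the OpenAPI spec would be more robust):</p>

```python
# Map OpenAPI primitive types to Python types; extend as the schema grows.
TYPE_MAP = {"integer": int, "string": str}

# Hand-written mirror of the book schema shown above.
BOOK_SCHEMA = {"id": "integer", "title": "string", "author": "string"}


def matches_schema(data: dict, schema: dict) -> bool:
    """Check that every schema key is present with the expected type."""
    return all(
        key in data and isinstance(data[key], TYPE_MAP[expected])
        for key, expected in schema.items()
    )


good = {"id": 0, "title": "string", "author": "string"}
bad = {"id": "not-an-integer", "title": "string"}
# matches_schema(good, BOOK_SCHEMA) → True; matches_schema(bad, BOOK_SCHEMA) → False
```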
<h3 id="including-a-few-examples">Including a Few Examples</h3><p>This can provide additional guidance to the model through a &quot;worked&quot; sample. To do this, we might include a sample EMAIL and OUTPUT in the desired format before the actual EMAIL that we need to process.</p><h3 id="fine-tuning">Fine-Tuning</h3><p>Finally, for high-risk or mission-critical cases where it is important to get it right, fine-tuning the model with lots of training samples can really lock in the output for what you need. We&apos;ve found fine-tuning to increase accuracy from the low-70%s to over 90%.</p><hr><p><strong>As for which format to choose, </strong>this decision depends on the data you need to process and the model you plan to use. Generally, we like JSON for OpenAI models and XML for other foundation models. We find that the extra token costs and processing time are rarely a concern, and these formats allow for more explicit definition.</p><p>This kind of project is exciting for us because it allows us to tailor the output to your specific needs, which lets us dig into what you need to get out of it. We also find that this kind of project can save a lot of time for people, for example speeding up how quickly a team can work through hundreds or thousands of messages, and sometimes even increase &quot;objectivity&quot; if multiple people are working on the same task.</p><p>We&apos;d love to hear how you think this can be helpful for you! <a href="mailto:info@stradiapartners.com" rel="noreferrer">Email us to talk.</a></p>]]></content:encoded></item><item><title><![CDATA[Augmenting Airtable with Serverless]]></title><description><![CDATA[<p>A disclaimer, this post is not about best practices or even prescribing a solution. This article will outline a unique challenge and the approach we took to get past the hurdle. So let&apos;s talk about what we faced! 
Over the last 3 years, we helped a client transition</p>]]></description><link>https://blog.stradiapartners.com/augmenting-airtable-with-serverless/</link><guid isPermaLink="false">6689d9467ba7400001c0cdc5</guid><dc:creator><![CDATA[Sebastian Florez]]></dc:creator><pubDate>Sun, 07 Jul 2024 17:12:34 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1667984390527-850f63192709?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDE0fHxjbG91ZCUyMGNvbXB1dGV8ZW58MHx8fHwxNzIwMzEwMzcxfDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1667984390527-850f63192709?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDE0fHxjbG91ZCUyMGNvbXB1dGV8ZW58MHx8fHwxNzIwMzEwMzcxfDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Augmenting Airtable with Serverless"><p>A disclaimer: this post is not about best practices or even prescribing a solution. This article will outline a unique challenge and the approach we took to get past the hurdle. So let&apos;s talk about what we faced! Over the last 3 years, we helped a client transition a lot of their internal tooling over to a tool called Airtable. So you may be wondering why Airtable, and the answer is simple: <strong>speed</strong>. I mean, let&apos;s be honest, many of us have been there: a room with executives who wanted something yesterday, and your team is tasked with building a time machine.</p><h2 id="the-challenge"><strong>The Challenge</strong></h2><p>Now that you understand the motivation, let&apos;s fast forward. We are fully on board with Airtable; we have architected a core set of environments that meet the needs of specific departments, and folks are off to the races! Well, not quite. The peak hours of this client are between 2 p.m. and 5 p.m., and coincidentally, that&apos;s also when we started to notice that high-traffic environments were starting to degrade. 
For some context, Airtable gives you the ability to define automated workflows using a mechanism in their software called automations. Automations give the user an easy interface for defining a workflow along with the conditions that will trigger it. Once those conditions are met, the automation is executed, and the user-defined workflow is run. There are some limitations to be aware of, however:</p><ul><li>An Airtable base is limited by the number of automations you can create.</li><li>A single automation instance has a runtime limit of 30 seconds.</li><li>A single automation instance can make a set number of HTTP requests.</li><li>The concurrency of these automations is handled by Airtable behind the scenes.</li><li>An Airtable account is limited by the number of total runs a workspace can execute.</li><li>Automation compute shares resources with the rest of the operations occurring in a base (e.g., user actions, API calls).</li></ul><h2 id="what-we-did-about-it"><strong>What We Did About It</strong></h2><p>The first lever we used was working directly with Airtable on possible performance enhancements. We did end up getting a higher compute instance provisioned for high-traffic bases. We knew that the larger instance would not be enough; we needed to be able to scale past the amount of user interaction we had, so what was left? The key was identifying that automations share compute resources with the rest of the base, so how could we offload the CPU time? 
Here is what we did.</p><ul><li>We asked Airtable for the CPU and memory metrics of all the automations in our high-traffic instances.</li><li>We used that list to identify candidates for &quot;promotion&quot; out of Airtable.</li><li>We leveraged Airtable&apos;s webhooks to architect an &quot;automation&quot; solution from our end.</li><li>When the output of an automation had to interact with a base, we used Airtable&apos;s REST API to push those results back.</li></ul><h2 id="here-is-what-it-looked-like"><strong>Here Is What It Looked Like</strong></h2><figure class="kg-card kg-image-card"><img src="https://blog.stradiapartners.com/content/images/2024/07/diagram-export-7-6-2024-8_56_08-PM-1.png" class="kg-image" alt="Augmenting Airtable with Serverless" loading="lazy" width="2000" height="543" srcset="https://blog.stradiapartners.com/content/images/size/w600/2024/07/diagram-export-7-6-2024-8_56_08-PM-1.png 600w, https://blog.stradiapartners.com/content/images/size/w1000/2024/07/diagram-export-7-6-2024-8_56_08-PM-1.png 1000w, https://blog.stradiapartners.com/content/images/size/w1600/2024/07/diagram-export-7-6-2024-8_56_08-PM-1.png 1600w, https://blog.stradiapartners.com/content/images/2024/07/diagram-export-7-6-2024-8_56_08-PM-1.png 2000w" sizes="(min-width: 720px) 720px"></figure><p>We moved the compute over to a serverless function using AWS. This unlocked a few things for us:</p><ul><li>We moved valuable CPU load out of high-traffic Airtable bases.</li><li>We gained some control over the concurrency of automation runs.</li><li>We gained a longer runtime for complex operations.</li><li>We gained access to our own metrics over the lifecycle of any single run.</li></ul><p>We saw immediate results, and our users also noticed. Hurray! 
Of course, while this approach did solve our immediate problems, it added a whole new level of complexity to managing our Airtable instance, so we carefully decided what should be &quot;promoted&quot;.</p><p>We learned a lot during this excursion, and we could always improve the solution. It shows that clever solutions exist; you just need a little know-how, and the tool you use needs to allow some room for extension. We love finding interesting solutions to tough problems, so if you are interested in partnering with us, <a href="https://stradiapartners.com/?ref=blog.stradiapartners.com" rel="noreferrer"><strong>reach out and let&apos;s chat</strong></a><strong>.</strong></p>]]></content:encoded></item><item><title><![CDATA[How We Read 25k Emails a Month to Drive Revenue with AI]]></title><description><![CDATA[<p>Generative AI is touted as a tool for increasing revenue and reducing expenses. It is easy to get lost in the hype about the future and miss the amazing operational workflows that can be improved today.</p><p>For over a year and a half with one of our clients we&#x2019;</p>]]></description><link>https://blog.stradiapartners.com/how-we-read-25k-emails-a-month-to-drive-revenue-with-ai/</link><guid isPermaLink="false">667f12ca7df4e90001f0b58f</guid><dc:creator><![CDATA[Aaron Berdanier]]></dc:creator><pubDate>Fri, 05 Jul 2024 14:00:16 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1559057287-ce0f595679a8?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDEzfHxtYWlsfGVufDB8fHx8MTcxOTYwNDA5NXww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1559057287-ce0f595679a8?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDEzfHxtYWlsfGVufDB8fHx8MTcxOTYwNDA5NXww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="How We Read 25k Emails a Month to Drive Revenue with AI"><p>Generative AI is 
touted as a tool for increasing revenue and reducing expenses. It is easy to get lost in the hype about the future and miss the amazing operational workflows that can be improved today.</p><p>For over a year and a half, with one of our clients, we&#x2019;ve been reading over 25,000 inbound sales emails a month, extracting the relevant data with a fine-tuned <strong>OpenAI</strong> GPT model, and adding it to an <strong>Airtable</strong> base for their sales team to action.</p>
<!--kg-card-begin: html-->
<img src="https://stradia-images.s3.us-east-2.amazonaws.com/ai_workflow.png" alt="How We Read 25k Emails a Month to Drive Revenue with AI">
<!--kg-card-end: html-->
<p><strong>The challenge</strong> was that the sales team was drowning in emails that they were managing out of a shared email inbox. Emails went unanswered and there wasn&#x2019;t clarity on how many requests were actually received, so it was difficult to measure conversion.</p><p>Now emails can go from receipt to quoted in a matter of minutes. Salespeople can review and send their quotes with the click of a button in Airtable. Time that used to be spent on manual data entry (read the email, identify the request, check other similar requests, create the quote) can now be spent actually calling the customer and closing the deal.</p><h2 id="how-we-did-it">How We Did It</h2><p>We went from zero to launch in less than two months, increasing the quote volume &gt;10x in a matter of weeks.</p><p>We did this by:</p><ol><li>Getting alignment across all teams,</li><li>Starting with a thin vertical slice, and</li><li>Improving and adjusting incrementally.</li></ol><p>The project was spearheaded by an executive team that was excited by the opportunity of technology. They worked to ensure that all teams were on board, which is important for this type of project because of the broad impacts on the company processes and people.</p><p>For the technology, our first tests used OpenAI&#x2019;s text-davinci-003 with a specified output format and only a few prompt examples. This didn&#x2019;t work very well (maybe 50-50 for this application, but note: this type of few-shot learning can work really well in other applications), so we quickly pivoted to fine-tuning. We collected around 100 examples of emails that we manually parsed and used to train the model. Then we launched the system for the salespeople with a limited set of customers (YOLO, as they say).</p><p>After launching, we learned a lot really quickly. At first, some of the salespeople were concerned about their jobs being taken by robots. Our internal detractors were quick to point out all of the errors. 
Our internal early adopters were also able to start increasing their output. So we started the next phase of improving the system.</p><p>With another testing set, we learned that the accuracy of the model was around 70%. Not bad, but not really good enough for unsupervised usage. So we retrained the model with hundreds of new examples, focusing on the edge cases that the sales team identified.</p><p>By this time, OpenAI had released upgrades to their model with gpt-3.5-turbo, which helped with accuracy, and also released Function Calling, which improved the formatting issues. With these updates and our expanded training set, we were able to get out-of-sample accuracy up to 96%. Pretty impressive!</p><p>The system has certainly needed maintenance and additional updates since then as the business requirements have evolved. But the core functions have been humming along in the background, churning out structured data and empowering the client to increase sales.</p><p>We&#x2019;re interested in helping other clients leverage AI in useful ways, whether it is reading unstructured data or something else entirely. 
<strong>Reach out to talk to us!</strong></p>]]></content:encoded></item><item><title><![CDATA[Low Tech De-Risking (Taste the Food First!)]]></title><description><![CDATA[Listen, just taste the food before ordering $120,000 of it, okay?]]></description><link>https://blog.stradiapartners.com/non-technical-derisking-before-writing-code/</link><guid isPermaLink="false">667dd9f67df4e90001f0b532</guid><category><![CDATA[Product Management]]></category><category><![CDATA[Testing]]></category><category><![CDATA[Low Code]]></category><category><![CDATA[No Code]]></category><category><![CDATA[Software]]></category><category><![CDATA[Consulting]]></category><category><![CDATA[Management]]></category><category><![CDATA[Risk]]></category><category><![CDATA[Product]]></category><dc:creator><![CDATA[Catie King]]></dc:creator><pubDate>Wed, 03 Jul 2024 16:43:13 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1577106263724-2c8e03bfe9cf?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDN8fGNoZWZ8ZW58MHx8fHwxNzIwMDIzODMzfDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1577106263724-2c8e03bfe9cf?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDN8fGNoZWZ8ZW58MHx8fHwxNzIwMDIzODMzfDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Low Tech De-Risking (Taste the Food First!)"><p>Let&apos;s imagine you&apos;re opening a restaurant. You dream of being the hardest-to-snag dinner reservation on the block.</p><p>A friend calls and says, &quot;You have to hire Alfredo as Head Chef. 
His cooking is sublime.&quot;</p><p>Would you: A) Meet Alfredo, get to know him, look at his resume, and taste his famous pasta, or B) Hire Alfredo as your Head Chef right away?</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.stradiapartners.com/content/images/2024/07/louis-hansel-v3OlBE6-fhU-unsplash-2.jpg" class="kg-image" alt="Low Tech De-Risking (Taste the Food First!)" loading="lazy" width="2000" height="3079" srcset="https://blog.stradiapartners.com/content/images/size/w600/2024/07/louis-hansel-v3OlBE6-fhU-unsplash-2.jpg 600w, https://blog.stradiapartners.com/content/images/size/w1000/2024/07/louis-hansel-v3OlBE6-fhU-unsplash-2.jpg 1000w, https://blog.stradiapartners.com/content/images/size/w1600/2024/07/louis-hansel-v3OlBE6-fhU-unsplash-2.jpg 1600w, https://blog.stradiapartners.com/content/images/2024/07/louis-hansel-v3OlBE6-fhU-unsplash-2.jpg 2000w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Alfredo, working hard presumably</span></figcaption></figure><p>The success of your restaurant hinges upon this decision! So obviously you&apos;d go with option A. But throughout my career in software development, I&apos;ve seen leaders &quot;hire Alfredo&quot; sight unseen again and again.</p><p>Let me explain.</p><p>In a previous role, an executive came to me with an innovative idea to increase KPIs with new technology workflows. He was ready to dedicate 3 months of our product roadmap to making the dream a reality. </p><p>I estimated one week of my team&apos;s time to cost about $20,000, and that&apos;s before considering the opportunity cost of having them focus on this project instead of other valuable known-quantity projects. 
This project would cost a minimum of $120,000: a huge investment for an unknown ROI.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.stradiapartners.com/content/images/2024/07/thisisengineering-uOhBxB23Wao-unsplash-1.jpg" class="kg-image" alt="Low Tech De-Risking (Taste the Food First!)" loading="lazy" width="2000" height="1334" srcset="https://blog.stradiapartners.com/content/images/size/w600/2024/07/thisisengineering-uOhBxB23Wao-unsplash-1.jpg 600w, https://blog.stradiapartners.com/content/images/size/w1000/2024/07/thisisengineering-uOhBxB23Wao-unsplash-1.jpg 1000w, https://blog.stradiapartners.com/content/images/size/w1600/2024/07/thisisengineering-uOhBxB23Wao-unsplash-1.jpg 1600w, https://blog.stradiapartners.com/content/images/2024/07/thisisengineering-uOhBxB23Wao-unsplash-1.jpg 2000w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">&quot;Hey, we can test this!&quot;</span></figcaption></figure><p>I suggested that before jumping into the project, we should test the riskiest assumptions with no-code processes. The cheapest, fastest way to learn whether the features would work was to change our services workflows to adopt them manually, without the supporting technology.</p><p>So how would we test? Someone needed to train our Services team to offer alternatives when an original customer request couldn&apos;t be accommodated. We then needed to measure how often the customer took the alternative, and thus captured additional revenue. Conversations could be tracked and recorded in <a href="https://www.airtable.com/about?ref=blog.stradiapartners.com" rel="noreferrer">Airtable</a>. The cost of this approach was a bit of Airtable configuration and training time. The outcome would be evidence of whether our development time would be worth it.</p><p>So what happened? He didn&apos;t want to initiate the test or wait for the results. 
He wanted to &quot;hire Alfredo&quot;: a $120,000 bet.</p><p>We started <a href="https://stradiapartners.com/?ref=blog.stradiapartners.com" rel="noreferrer">Stradia Partners</a> to help people approach problems like this differently, even if it means less money for us. </p><p><strong>Our process would look something like this:</strong></p><ol><li>Start by calculating the size of the bet you&apos;re making with development. <strong>What will it cost to build in time, money, and opportunity?</strong><ol><li>For my example, the cost is $120,000 of development time + opportunity cost of not working on previously validated roadmap projects.</li></ol></li><li>Identify the things that need to be true for the bet to pay off (risks). List them.<ol><li>In my example, the bet would pay off if customers were willing to take alternative options and the revenue gained from those sales was more than the cost of development, maintenance, training, and adoption.</li></ol></li><li>See if you can de-risk the bet prior to making it. <strong>This is known as a </strong><a href="https://modelthinkers.com/mental-model/riskiest-assumption-test?ref=blog.stradiapartners.com" rel="noreferrer"><strong>Riskiest Assumption Test (RAT)</strong></a><strong>.</strong><ol><li>If customers will take the alternative over the phone or email, then it&apos;s likely they&apos;d take it in an app (bet = validated).</li><li>If customers don&apos;t take the alternative over the phone or email, then it&apos;s even less likely they&apos;d take it in the app (bet = invalidated).</li></ol></li><li>Run the test <strong>(taste the food)</strong>; measure. </li><li>Use the test results to evaluate whether there&apos;s a case for the bet to pay for the cost calculated in step 1. If so, make the bet <strong>(hire Alfredo).</strong> </li></ol><p>Want help tasting the food and deciding whether to hire Alfredo? 
We&apos;re in your corner; <a href="https://stradiapartners.com/?ref=blog.stradiapartners.com" rel="noreferrer">shoot us a note</a>.</p><p>Bon App&#xE9;tit!</p>]]></content:encoded></item><item><title><![CDATA[Avoiding (or Fixing) a Data Dumpster Fire]]></title><description><![CDATA[<p>&#x1F7E7;<br>
&#x1F7E5; &#x1F7E7;</p>
<h2 id="garbage-in-garbage-out">Garbage In, Garbage Out</h2><p><em>You know the saying.</em></p><p>When you use data in your organization&#x2014;for anything from reporting and analytics to automation&#x2014;you run the risk of building a giant trash pile. This isn&#x2019;t your fault. Sometimes it just happens as the</p>]]></description><link>https://blog.stradiapartners.com/avoiding-or-fixing-a-data-dumpster-fire/</link><guid isPermaLink="false">667ebaed7df4e90001f0b53b</guid><dc:creator><![CDATA[Aaron Berdanier]]></dc:creator><pubDate>Wed, 03 Jul 2024 13:46:51 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1662826479300-6cc4ced8a1cc?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDM0fHxkdW1wc3RlcnxlbnwwfHx8fDE3MTk1ODE2NDV8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1662826479300-6cc4ced8a1cc?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDM0fHxkdW1wc3RlcnxlbnwwfHx8fDE3MTk1ODE2NDV8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Avoiding (or Fixing) a Data Dumpster Fire"><p>&#x1F7E7;<br>
&#x1F7E5; &#x1F7E7;</p>
<h2 id="garbage-in-garbage-out">Garbage In, Garbage Out</h2><p><em>You know the saying.</em></p><p>When you use data in your organization&#x2014;for anything from reporting and analytics to automation&#x2014;you run the risk of building a giant trash pile. This isn&#x2019;t your fault. Sometimes it just happens as the sources of data grow and as the business requirements change.</p><p>But we&#x2019;ve seen the consequences, and they can be severe:</p><ul><li>Employees performing duplicate data entry</li><li>Executives arguing over the provenance of the data</li><li>Data products that don&#x2019;t deliver on the AI hype</li></ul><p>To get past these barriers, it is important to recognize that your data ecosystem will evolve as your organization&#x2019;s needs change. Then, you can focus on laying a solid foundation for your current (or next) use case.</p><p>&#x1F7E8;<br>
&#x1F7E7; &#x1F7E8;<br>
&#x1F7E5; &#x1F7E7; &#x1F7E8;</p>
<h2 id="an-example">An Example</h2><p>I&#x2019;ll explain our framework below, but first, a small example.</p><p>One of our clients recently wanted to record expenses for events. Ultimately they were interested in seeing how close the cost of each event was to the initial budget. The existing solution had expense records with an estimated and an actual cost. This makes sense data-wise, but the challenge was that entering the actual cost for an expense required finding the initial estimate record, which was difficult. Instead, we set them up with a single data entry source where each expense was either an estimate or an actual.</p><p>We like to be pragmatic about data structures. This isn&apos;t a perfect solution and there are definitely other ways to do it. You might argue that this design is insufficient because it doesn&#x2019;t allow the client to drill down and see which items caused the event to go over or under. But it is also important to remember that linking the expenses to their estimates would be difficult for the employees to maintain and extremely uncertain with an automated solution, likely leading to a data garbage pile.</p><p><strong>Instead, our goal is to set up a foundation so that clients can build and iterate.</strong> Now the client is able to easily enter expenses and calculate overall accuracy. Additionally, they are set up to automate expense entry in the future with AI by collecting data from receipts without worrying about matching specific expense records.</p><p>&#x1F7E6;<br>
&#x1F7E9; &#x1F7E6;<br>
&#x1F7E8; &#x1F7E9; &#x1F7E6;<br>
&#x1F7E7; &#x1F7E8; &#x1F7E9; &#x1F7E6;<br>
&#x1F7E5; &#x1F7E7; &#x1F7E8; &#x1F7E9; &#x1F7E6;</p>
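<p>A minimal sketch of the data structure from the example above: each expense is a flat record tagged as an estimate or an actual, so data entry never requires hunting down a matching estimate record, and budget accuracy falls out of a simple aggregation. The field names and figures here are illustrative, not the client&apos;s actual schema.</p>

```python
from collections import defaultdict

# Illustrative records: each expense is a single flat row tagged by kind,
# with no link back to a specific estimate record.
expenses = [
    {"event": "Spring Gala", "kind": "estimate", "amount": 5000.0},
    {"event": "Spring Gala", "kind": "actual", "amount": 5400.0},
    {"event": "Spring Gala", "kind": "actual", "amount": 300.0},
]

def budget_accuracy(records):
    """Sum estimated vs. actual cost per event and compute the variance."""
    totals = defaultdict(lambda: {"estimate": 0.0, "actual": 0.0})
    for r in records:
        totals[r["event"]][r["kind"]] += r["amount"]
    return {
        event: {**t, "variance": t["actual"] - t["estimate"]}
        for event, t in totals.items()
    }

print(budget_accuracy(expenses))
# Spring Gala: estimate 5000.0, actual 5700.0, variance 700.0 (over budget)
```

<p>Note that the drill-down question (&quot;which line item blew the budget?&quot;) is deliberately out of scope here; answering it would require the record-to-record matching the text warns against.</p>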
<h2 id="the-data-product-pyramid">The Data Product Pyramid</h2>
<!--kg-card-begin: html-->
<img src="https://stradia-images.s3.us-east-2.amazonaws.com/data_product_pyramid-2.png" alt="Avoiding (or Fixing) a Data Dumpster Fire">
<!--kg-card-end: html-->
<p>Years ago I wrote about the <a href="https://medium.com/towards-data-science/no-one-needs-your-data-a175756b2c16?ref=blog.stradiapartners.com" rel="noreferrer">data product pyramid</a>, which is a framework for understanding how data products are built. The idea is that &#x201C;higher-level&#x201D; outcomes&#x2014;stuff like workflow automation and decision support&#x2014;require a solid base layer of robust data without major gaps, inconsistent sources, or data entry errors.</p><p>When we work with clients, we emphasize the importance of continuing to check the base and expanding incrementally. Raw data are at the base because they are the least refined. Each successive level builds on the previous in a data hierarchy of needs. A stable foundation is even more important as organizations try to embed AI into their processes, because these tools rely on clean and consistent inputs to produce value.</p><p><strong>Our approach is not one-size-fits-all</strong>. Avoiding (or fixing) a data dumpster fire involves understanding where each client is headed and what they have as their current foundation. We use the data product pyramid to guide our discovery. From there, we can work together to build a great product or tool, one layer at a time.</p><p>&#x1F7EA;<br>
&#x1F7E6; &#x1F7EA;<br>
&#x1F7E9; &#x1F7E6; &#x1F7EA;<br>
&#x1F7E8; &#x1F7E9; &#x1F7E6; &#x1F7EA;<br>
&#x1F7E7; &#x1F7E8; &#x1F7E9; &#x1F7E6; &#x1F7EA;<br>
&#x1F7E5; &#x1F7E7; &#x1F7E8; &#x1F7E9; &#x1F7E6; &#x1F7EA;</p>
]]></content:encoded></item></channel></rss>