
Can You Trust AI For Plant Care?

Why Garden Advice Bots Still Get It Wrong
Reading Time: 17 - 23 minutes (5080 words)
Published: October 8, 2025


In this UG article, I'll explore how AI’s speed, scale, and data biases collide with the slow, context-rich world of plant care, both indoors and out in our gardens. I’ll look at where AI advice goes wrong, how to spot unreliable information, and what practical steps unlikely gardeners can take to use these tools wisely.

Introduction

AI moves fast. Plants don’t. That mismatch explains why AI-generated gardening advice can sound perfectly confident while quietly missing the mark. Large language models (LLMs, like ChatGPT, Claude, Grok, Gemini, and Copilot) learn from what’s most plentiful online, not what’s most accurate. As a result, vast piles of hobby blogs, recycled forum posts, influencer chatter, dated newsletters, and old books dominate their training data. Meanwhile, the most reliable horticultural research is often locked behind paywalls, written for commercial growers, or simply not directly relevant to the realities of houseplants and home gardens without a lot of interpolation.

This UG post isn't about crapping on AI entirely; AI has genuine value when used correctly. But understanding where these systems fail, why they fail, and how to extract reliable information requires looking at both the technical limitations of the technology and the fundamental mismatch between how AI learns and how plants actually grow.

The problem extends beyond dedicated AI chatbots. Google's AI Overviews (the AI-generated summaries that now appear at the top of many search results) inherit the same fundamental issues while carrying the implicit authority of the Google brand. Many users trust Google to surface the most accurate information, not realising that organic search results are primarily ranked by popularity metrics and SEO optimisation rather than scientific accuracy. When Google's AI then summarises these popularity-driven results, it amplifies the same myths and anecdotes that plague other AI systems.

The Core Problem: Training Data ≠ Grounded Truth

AI models learn by imitating patterns in their training data. If the web over-represents anecdotal advice like "snake plants thrive in low light" or "always use chunky soil," large language models will repeat those patterns with fluent confidence. Researchers like Emily Bender have described this as the "stochastic parrots" problem: systems that can produce human-like text without genuine understanding of whether that text reflects reality.

Why the Internet Is a Skewed Teacher

The mathematical reality is stark: AI training favours volume over verification.

High Volume, Low Value Content:

  • Personal gardening blogs and sales-driven influencer posts
  • Forum discussions and social media threads
  • Biased gardening websites (often sales-optimised rather than accuracy-optimised)
  • YouTube transcripts from popular plant and gardening influencers
  • Gardening newsletters and lifestyle magazines (heavily ad-supported)

Low Volume, High Value Content:

  • Peer-reviewed horticultural research and reports
  • Agricultural extension publications (much better but often imperfect)
  • Academic journals focused on plant physiology
  • Professional greenhouse and nursery management studies
  • Controlled trials with documented environmental parameters

For every peer-reviewed, paywalled study on optimal indoor or garden plant care practices, there are thousands of free blog posts offering contradictory advice based purely on personal anecdotal experience. When AI training data favours volume over verification, these AI systems give equal or greater weight to some of the most-repeated myths.

This same problem affects Google search results. Google's organic search rankings prioritise engagement metrics, backlinks, and SEO optimisation (essentially, popularity signals) rather than scientific accuracy (PS - I do this in my day job). A well-optimised blog post with dozens of backlinks will outrank a university extension bulletin with better information but weaker SEO. Users searching for "how to repot an Alocasia" will typically see the most popular advice at the top of results, not necessarily the most accurate info regarding the challenges and dangers of repotting a plant.

Google's AI Overviews compound this problem. When Google generates AI summaries at the top of search results, it draws from the same popularity-biased corpus that organic search surfaces. The AI inherits all the training data problems discussed in this UG article while appearing with Google's authoritative branding. Users who might be sceptical of a random blog post may trust an AI Overview implicitly because it comes from Google, even though it's synthesising the same unreliable sources.

Indoor conditions are dramatically under-studied. The bulk of horticultural research focuses on commercial greenhouse operations, outdoor landscaping, or field crop production. These environments bear little resemblance to the conditions most of our houseplants face: low light intensity (PPFD, or photosynthetic photon flux density, typically 50–150 µmol/m²/s versus 300–800+ in greenhouses), stagnant air, low humidity, unstable temperatures from heating and cooling systems, and highly variable soilless substrates with limited microbial activity.

Legacy myths persist and amplify. The famous NASA clean air study gets endlessly recycled as proof that a few houseplants will purify your living room. The original research used sealed chambers with no air exchange, conditions that bear no relationship to typical homes with normal ventilation. Yet this myth appears in thousands of sources, teaching AI to confidently repeat fundamentally misapplied science.

Pro Tip: When AI advice or FB group advice sounds absolute ("always," "never," "all plants require"), assume it reflects internet consensus, not research-based facts. The same applies to Google search results; the top-ranking pages are typically the most popular and well-optimised, not the most scientifically accurate.

Hallucinations: When AI Sounds Right But Isn't

In natural language processing research, "hallucination" refers to when models generate plausible-sounding content that is partially or entirely fabricated. Studies examining abstractive summarisation have found substantial rates (up to 91.4%) of unfaithful content: text that reads smoothly but doesn't accurately reflect source material or reality.

How This Manifests in Plant Care Advice

  • Invented precision: "Optimal" PPFD or vapour pressure deficit (VPD) numbers with no actual source, or figures copied from greenhouse tomato production and applied to medium-light plants in your living room.
  • False certainty: Bold "always" and "never" statements with no qualifiers for light levels, substrate porosity/composition, air movement, or watering methods
  • Citation theatre: References that are incomplete, mismatched to the claim, or entirely non-existent (the more polished the paragraph, the more you should verify its references)
  • Context-free transfers: Greenhouse best practices that assume 12-hour photoperiods, active ventilation, CO₂ supplementation, and fertigation systems applied to windowsill pots

Google AI Overviews are not immune to hallucinations. Despite appearing at the top of search results with Google's branding, these AI-generated summaries can misrepresent source material, combine incompatible information from different sources, or present speculation as fact. The Overview format (concise, confident, prominently displayed) can make fabricated details seem more authoritative than they actually are.

Five Systemic Failure Modes

1. Context Collapse

AI often responds as if all homes, seasons, light intensities, and growing media were identical. Complex, multi-variable growing conditions get flattened into single, averaged suggestions that may not apply to your specific situation.

Example: Recommending weekly watering for a succulent without considering whether you have 15% or 60% relative humidity, whether the plant receives 50 or 200 µmol/m²/s of light, or whether the substrate is 80% pumice or 80% peat.

Fix it: When querying these AI and search tools, provide comprehensive environmental data (PPFD at leaf level, air and leaf temperatures, relative humidity, substrate composition by volume, pot size, and current watering regime).

2. False Precision and Borrowed Numbers

AI can generate specific-sounding numbers that are either fabricated or borrowed from unrelated plants, crops or contexts. It looks scientific, but lacks the specificity needed for your situation.

Fix it: Ask for ranges rather than single values. Request sources with full experimental context (what species, what environment, what light levels, what objective). Default to university extension publications for baseline guidance whenever possible. They are not perfect, but usually a lot more accurate than a typical blog incentivised by endless on-page advertising and filled with hastily AI-written articles.

3. Myth Amplification Through Repetition

Because these large language models (LLMs) optimise for next-word probability based on training frequency, myths that are ubiquitous online (like the benefits of cinnamon, or silica's ability to help all plants) become attractors in the model's probability space. The more a claim is repeated, regardless of accuracy, the more confident the AI becomes in reproducing it.

Common amplified myths:

  • A few houseplants will meaningfully purify the air in your home (the misapplied NASA chamber study)
  • "Snake plants thrive in low light"
  • "Always use chunky soil"
  • Cinnamon and silica as cure-alls that benefit every plant

Fix it: Ask the AI tool/model to present competing hypotheses with evidence for each and what would falsify them. This forces more rigorous reasoning.

Myth Check: When AI "balances" two conflicting views, it isn't weighing evidence; it's averaging text probabilities.

4. The Indoor vs. Greenhouse Transfer Problem

Greenhouse vs. Indoor Growing Conditions
Factor       | Greenhouse                              | Indoor Reality
Light        | 300–800+ µmol/m²/s                      | 50–150 µmol/m²/s (typical)
Air Movement | Active fans, ventilation                | Stagnant, minimal exchange
VPD          | Controlled, stable                      | Erratic due to HVAC and seasonal variation
Substrate    | Mature media, active biology            | Soilless mixes, limited microbial life
Irrigation   | Fertigation systems, precisely measured | Manual watering, inconsistent
Table: Environmental contrasts between greenhouse and indoor growing conditions.

Recommendations for optimised and automated light, airflow, humidity, watering, and nutrient growing systems will systematically overestimate a variety of needs compared to typical indoor conditions. This does not mean the recommendations don't have value, but you need to know what to adapt and how for the information to be most advantageous.

Fix it: Explicitly translate commercial guidance to indoor constraints. If advice assumes 400+ µmol/m²/s and you're only providing 100 µmol/m²/s, then you need to proportionally reduce things like fertiliser rates and adjust soilless mix porosity accordingly. Guessing with things like a generic "chunky mix" is a poor approach to long-term plant or garden success.
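
To make that proportional thinking concrete, here's a minimal Python sketch. It assumes simple linear scaling of feed strength with light, which is a deliberate simplification (nutrient demand doesn't track PPFD perfectly), and the 200 ppm starting figure is just an illustration, not a recommendation:

# Rough rule of thumb: scale a recommended feed strength to your actual light level.
# Linear scaling with PPFD is an assumption, not a law; treat the result as a starting point.
def scale_feed_strength(recommended_ppm_n: float,
                        assumed_ppfd: float,
                        measured_ppfd: float) -> float:
    # Scale the feed strength (ppm nitrogen) by the ratio of measured light
    # to the light level the original recommendation assumed.
    return recommended_ppm_n * (measured_ppfd / assumed_ppfd)

# Greenhouse guidance written for 400 µmol/m²/s, windowsill delivering 100 µmol/m²/s:
print(scale_feed_strength(200, assumed_ppfd=400, measured_ppfd=100))  # 50.0 ppm N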

Pro Tip: Academic research prioritises crops, turf, and landscape plants, not indoor tropicals. AI has almost no reliable reference material for houseplant care under typical home conditions.

5. Species-Level Confusion and Regional Blindness

Home plant and gardening content routinely uses common names loosely, while scientific research employs precise botanical nomenclature. This can create confusing identification problems:

  • "Jade plant" encompasses multiple Crassula species with different water, light, and temperature requirements
  • "Palm" includes hundreds of species across different families with dramatically different care needs
  • "Fern" ranges from xeric species to obligate wetland plants

Similarly, most research papers for outdoor gardening specify USDA hardiness zones and regional conditions. Blog posts rarely do. AI might suggest outdoor planting calendars that work in zone 8a/b (like mine) but fail catastrophically in zone 4, or pest management for insects that don't exist in your bioregion.

Fix it: Use scientific names whenever possible, and always include your location, USDA zone, or local climate characteristics when asking region-dependent questions. This mostly applies to outdoor plant care and winter survivability, not to indoor plants from the tropics.

Real-World Trouble Spots

Fertilisation Advice

AI often suggests fertilisation schedules and product ratios drawn from marketing materials and general blog advice rather than understanding:

  • Your plant's actual light level and growth rate, which drive nutrient demand
  • The substrate's composition and how it holds, releases, or binds nutrients
  • Your water quality, alkalinity, pH, and existing salt (EC) levels

Pest and Disease Diagnosis

Visual identification of plant problems requires expertise that goes far beyond simple pattern matching:

  • Multiple problems can produce similar symptoms (nutrient deficiency vs. root anoxia)
  • Environmental stress often mimics disease
  • Regional pest identification requires local knowledge (fungus gnat vs. shore fly)
  • Treatment recommendations need to consider the specific pathogen or pest, not just visual symptoms

AI trained on image captions and blog descriptions may confidently misdiagnose problems. True diagnosis often requires microscopic examination, context, and sometimes lab testing, none of which AI can do.

Propagation and Pruning

While AI can describe common propagation techniques, it often fails to specify success rates, seasonal timing requirements, hormone treatment needs, or post-propagation care that differs from mature plant care.

For pruning, generic "cut back by one-third" advice ignores species-specific growth patterns, timing relative to flowering and dormancy, proper cut placement relative to nodes and buds, and sterilisation requirements. Incorrect pruning can permanently damage or kill plants.

The Echo Chamber Effect

As AI-generated content proliferates online, a concerning feedback loop emerges. Newer models increasingly train on content that was itself created by earlier AI systems. This means:

  1. Myth amplification accelerates as AI-generated repetitions reinforce AI training
  2. Nuance erosion as information gets repeatedly simplified and averaged
  3. Error fossilisation where mistakes in AI content become "facts" for the next generation
  4. Signal-to-noise ratio degradation over successive AI training cycles

Google AI Overviews intensify this feedback loop. When PHA group members and other gardeners search for plant care advice, they now encounter AI-generated summaries at the top of results, before seeing any actual sources. These Overviews appear authoritative, leading users to trust them without clicking through to verify. As these AI summaries get scraped, referenced, and incorporated into new blog posts and articles, they become part of the training data for future AI models. Google's search dominance means its AI-generated content has outsized influence on what becomes "common knowledge" online, even when that knowledge is flawed.

This suggests the reliability problem may worsen before it improves, unless training data is actively curated for accuracy rather than mere availability. It's a big part of why this kind of data and advice is not allowed under the Plant Hoarders Anonymous (PHA) group rules.

What AI Actually Does Well

Despite these many limitations, AI has genuine strengths when properly constrained:

  • Source synthesis: Rapidly pulling together known reliable sources once you direct it to them
  • Concept explanation: Translating technical terms like VPD, air-filled porosity, cation exchange capacity, and photosynthetic compensation points into accessible language
  • Diagnostic brainstorming: Listing possible causes for symptoms sorted by likelihood when you provide detailed measurements and images
  • Template generation: Creating care logs, trial protocols, observation schedules, and side-by-side comparison tables tailored to your space and habits
  • Literature search: Identifying relevant research papers and extension bulletins when given specific parameters

The key is treating AI as a research accelerator, not a replacement for measurement, observation, and validated sources.

Pro Tip: Use AI as an assistant, not an authority. Let it help draft summaries, provide reminders, and generate plant/experiment logs, not dictate direct care or growing practices.

Your Evidence-Based Workflow

1. Instrument Your Growing Environment

Before asking AI for advice, gather objective data:

  • PPFD at leaf level and your daily photoperiod
  • Air temperature, leaf temperature (if you can measure it), and relative humidity
  • Substrate composition by volume, pot size, and drainage
  • Current watering volume, frequency, and method
  • Fertiliser type, rate, and frequency

2. Start With Extension Service Baselines

University extension publications typically provide conservative, research-backed starting points for most common plants. These are generally written for non-commercial growers and account for variable home conditions. They aren't anywhere close to perfect, but they are generally better than a blog built solely to attract a revenue-generating audience for a hobbyist or plant influencer.

3. Be Specific to Be Terrific

This phrase is often used with BNI (Business Network International) members as it relates to networking opportunities, but it applies to all sorts of things, like goal setting and AI prompt engineering.

Essentially, don't ask basic questions. Instead ask something like... "My Philodendron bipennifolium receives 85 µmol/m²/s at leaf level for 10 hours daily, air temp 21°C, RH 45%, potted in 60% coco coir / 30% perlite / 10% worm castings, 15cm pot, currently watering approximately 200ml every 5 days. Leaves show interveinal chlorosis. List top three likely causes with mechanisms and what to measure next."

What you might get back using the example prompt above...

Philodendron bipennifolium — Interveinal Chlorosis: Top Likely Causes & What to Check

  1. Iron (Fe) deficiency from elevated media or irrigation pH
    • Mechanism: Fe is immobile; when pH drifts above ~6.2 in soilless media (coco tends to rise with alkaline water and worm castings), Fe becomes unavailable. Newest leaves yellow between veins first.
    • Measure next: Media pH (target 5.8–6.2) via 1:2 slurry, runoff pH, source-water alkalinity (ppm as CaCO₃), Fe chelate type on your fertilizer (EDDHA/DTPA hold up better than EDTA), symptom location (newest leaves confirm Fe immobility).
  2. Magnesium (Mg) deficiency or K→Mg antagonism in coco
    • Mechanism: Coco coir adsorbs Ca/Mg; high K in feed suppresses Mg uptake. Mg is mobile, so older leaves show interveinal chlorosis first, with margins paling before veins.
    • Measure next: Check which leaves are affected (older = Mg), feed and runoff EC, Ca:Mg:K ratio, whether Cal-Mag supplement is present and its Mg ppm, consider short trial supplying 50–75 ppm Mg.
  3. Manganese (Mn) deficiency or micro uptake inhibition from salts/root stress
    • Mechanism: Mn availability falls as pH rises; appears as fine mottling or tiny necrotic specks. Salt buildup or low oxygen in roots further limit micronutrient uptake.
    • Measure next: Media and runoff pH again, runoff EC (flush if high), inspect for mottling vs smooth yellowing, check root health and evenness of moisture—200 mL every 5 days may cause dry pockets in a 15 cm pot.

Quick context signals

  • Light/DLI: 85 µmol m⁻² s⁻¹ × 10 h ≈ 3.1 mol m⁻² day⁻¹ (low). Doesn’t directly cause chlorosis but makes leaves appear paler and slows recovery.
  • VPD: At 69.8 °F and 45 % RH ≈ 1.37 kPa — a bit high for aroids, so Ca/Mg/Fe shortfalls manifest faster.

Fast triage while you measure

  • Confirm symptom age to separate Fe (new) vs Mg (old).
  • Test and correct media/runoff pH to 5.8–6.2 and verify water alkalinity.
  • Use balanced feed with full micros + Cal-Mag; aim 50–75 ppm Mg and Fe chelate stable at your pH.
  • If runoff EC is high, flush 2–3× pot volume with pH-adjusted water then resume feeding.
  • For recovery, raise light to 150–200 µmol m⁻² s⁻¹ and RH to 55–60 % to ease VPD.

What you might get using a more generic request...

I think my Philodendron bipennifolium might have interveinal chlorosis. What can I do?

  • Test your media and runoff pH. If above 6.2, iron and manganese become less available. Aim for pH 5.8–6.2.
  • Check your water source for alkalinity and hardness. High bicarbonates can gradually raise coco pH.
  • Ensure your fertilizer includes a good iron chelate (DTPA or EDDHA). Basic EDTA types stop working at higher pH.
  • Add or verify Cal-Mag supplementation. Coco binds calcium and magnesium, so deficiencies are common.
  • If salts have built up, flush with 2–3× pot volume of pH-balanced water, then resume feeding.
  • Adjust your environment: raise humidity to 55–60%, and provide moderate light (~150 µmol·m⁻²·s⁻¹) for steadier growth.

Pro Tip: Identify whether yellowing starts on new or old leaves: new growth points to iron deficiency, while older leaves suggest magnesium issues.
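
One more habit worth building: sanity-check any numbers an AI hands back. The DLI and VPD figures in the detailed response above are easy to verify yourself. Here's a minimal Python sketch; the VPD function uses the common Tetens approximation for saturation vapour pressure and assumes leaf temperature equals air temperature:

import math

# Daily light integral (DLI) in mol/m²/day from PPFD (µmol/m²/s) and photoperiod (hours).
def daily_light_integral(ppfd_umol: float, hours: float) -> float:
    return ppfd_umol * hours * 3600 / 1_000_000

# Air VPD in kPa via the Tetens approximation for saturation vapour pressure.
# Assumes leaf temperature equals air temperature (a simplification).
def vpd_kpa(air_temp_c: float, rh_percent: float) -> float:
    svp = 0.6108 * math.exp(17.27 * air_temp_c / (air_temp_c + 237.3))
    return svp * (1 - rh_percent / 100)

print(round(daily_light_integral(85, 10), 1))  # ≈ 3.1 mol/m²/day (low)
print(round(vpd_kpa(21, 45), 2))               # ≈ 1.37 kPa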

4. Demand Sources and Experimental Context

When AI makes specific claims, require:

  • The actual source (paper title, authors, publication)
  • A cited link for direct reference
  • The situational setting (greenhouse, grow tent, field study)
  • The specific species tested
  • Light levels and environmental parameters used

If it cannot provide a real, relevant, accessible source, treat the claim as questionable, a hypothesis rather than fact, or a complete hallucination.

5. Make Changes Gradually and Track Results

When AI suggests improvements, even ones that feel well tuned to your situation, resist the urge to change everything at once. Adjust one factor at a time and give it 2 to 4 weeks before making another change. Take photos and keep simple notes about what you changed and when. This way, if things improve (or worsen), you'll know better what actually made the difference.

Start with the biggest impact changes first:

  1. Light (move the plant closer to a window or add a quality grow light if needed)
  2. Watering schedule (adjust how often or how much you water)
  3. Fertiliser (change the amount or frequency)
  4. Soilless Grow mix (only when repotting is absolutely necessary)

If you have multiple plants of the same type, you can try a change on one while keeping the other(s) as is. This helps you see whether the change actually helped or if something else was going on.
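
If you want a lightweight way to keep those notes, here's a minimal Python sketch of a care log stored as a CSV file. The file name and column choices are just an illustration; adapt them to whatever you actually measure:

import csv
from datetime import date
from pathlib import Path

LOG_FILE = Path("care_log.csv")
FIELDS = ["date", "plant", "change_made", "runoff_ph", "runoff_ec_ms",
          "ppfd_umol", "temp_c", "rh_percent", "notes"]

# Append one observation row, writing the header the first time the file is created.
def log_entry(**values):
    is_new = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({"date": date.today().isoformat(), **values})

log_entry(plant="Philodendron bipennifolium (test plant)",
          change_made="Raised light to ~150 µmol/m²/s",
          runoff_ph=6.0, runoff_ec_ms=1.2, ppfd_umol=150,
          temp_c=21, rh_percent=55,
          notes="Feed unchanged; second plant kept as control")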

6. Use AI for Communication, Not Conclusions

Have AI draft your observation logs, generate comparison tables, and format any of your trial results. But keep your actual decisions anchored to measurements and validated sources, not the AI's information gathering alone.

Safety-Casing AI Recommendations

Before acting on any AI-generated advice, pass it through these verification gates:

1. Source Integrity
Is there a real, relevant citation you can access and read? Does it actually say what the AI claims?

2. Context Match
Does the study environment match your setup closely enough to have any actual bearing? Greenhouse trials rarely apply directly to windowsills, although aspects of the study may have some bearing.

3. Measurement Compatibility
Do your PPFD, temperature, RH, pot size, and substrate support the recommendation, or would they need adjustment? Many of these are fairly easy to adapt. Lighting and the grow mix substrate will almost always be the hardest to replicate: one requires changing your light(s), or at least their coverage and intensity, and changing grow mix porosity often requires a complete repot (avoid this where possible, as it's very stressful).

4. Failure Impact Assessment
If the suggested advice is wrong, what's the damage and recovery path? Opt for low-risk advice and suggestions first. If changes to your lighting are included, do these first. They affect all aspects of a plant's biological response.

5. Monitoring Plan
What will you measure, how often, and what threshold triggers a rollback or modification?

Pro Tip: Ask your AI tool of choice to build a one-page "care change safety plan" including success metrics, rollback criteria, and a photo schedule.

Here's an example of what that might look like...

Care Change Safety Plan — Philodendron bipennifolium

Goal: correct interveinal chlorosis with minimal stress using measured, reversible steps.

Baseline (Day 0)

  • Leaf count; note which are chlorotic (new vs old).
  • Runoff pH/EC (target pH 5.8–6.2).
  • Water pH + alkalinity (ppm as CaCO₃).
  • Light (85 µmol·m⁻²·s⁻¹ × 10 h), Temp/RH (21 °C / 45 %).
  • Media age and last repot.

Intervention Timeline

  • Week 1: Flush 2–3× pot volume at pH 5.8; resume ¼–½ strength feed with full micros, Fe (DTPA/EDDHA) + 50–75 ppm Mg.
  • Week 2: Water by dryness/weight (~top 3 cm dry); keep pH 5.8–6.2.
  • Week 3: Retest runoff pH/EC; if pH > 6.2, mildly acidify feed.
  • Week 4: Nudge environment: 150 µmol·m⁻²·s⁻¹, RH 55–60 %.

Success Metrics (within 4–6 weeks)

  • New leaves fully green; no interveinal chlorosis.
  • Runoff pH stable 5.8–6.2 on 3 consecutive tests.
  • Runoff EC ~1.0–1.5 mS/cm after feeds (no rising trend).
  • Older leaves stable (no rapid necrosis); roots odor-neutral.

Rollback Criteria & Steps

  • Trigger: Faster yellowing on new + old leaves, distorted new growth, EC > 2.0 mS/cm, pH > 6.5, wilting/anaerobic smell.
  • Action: Pause changes → single flush 2× volume @ pH 5.8 → return to prior regimen 7 days → re-measure pH/EC → reassess.

Photo Schedule

  • Day 0: Full plant, newest leaf close-up, oldest leaf close-up.
  • Day 7: Post-flush check (same angles, same lighting).
  • Day 14 & 21: Colour/vein clarity and any new leaf.
  • Day 30 & 45–60: Compare old vs new growth; final outcome.

Quick Logging Template

  • Date • Runoff pH/EC • Water pH/alk • Volume • T/RH • Notes on colour/turgor.

Practical Prompt Templates

The key to getting better answers from AI is being specific about what you need and holding it accountable for its claims. Generic questions like "why are my plant's leaves yellowing?" will get you generic, potentially misleading answers. But when you provide actual measurements and environmental details, you force the AI to reason with real data instead of falling back on internet myths.

These templates are designed to extract more reliable, evidence-based responses. Copy them, fill in your specific information, and paste them into any AI system (ChatGPT, Claude, Google's Gemini, etc.). The structure pushes the AI toward citations, encourages it to acknowledge uncertainty, and reduces the chance of hallucinated advice. Think of these as training wheels that help you ask better questions until it becomes second nature.

Diagnostic Prompt

You are a horticultural consultant. Use only evidence-based reasoning.

Plant: [species, cultivar if known]
PPFD at leaf: [xx] µmol/m²/s for [xx] hours daily
Air temp: [xx]°C, Leaf temp: [xx]°C, RH: [xx]%
Substrate: [composition by volume]
Pot: [volume, material, drainage]
Watering: [volume, frequency, method]
Fertiliser: [type, N-P-K ratio, rate, frequency]
Symptoms: [detailed description]

List the top 3 likely causes with physiological mechanisms. 
For each, specify what to measure next.
Cite sources for each claim or mark as hypothesis.

Trial Design Prompt

Design a 4-week A/B trial to test [specific variable change].
Keep all other factors constant.
Specify:
- Daily measurements required
- Weekly photo documentation angles
- Success/failure criteria with thresholds
- Rollback triggers if problems emerge
Include a simple table template for logging results.

Source Verification Prompt

For each specific claim in your last response:
1. Provide full citation (authors, title, journal/publication, year)
2. Describe the experimental context (crop, growing environment, light level)
3. Note any limitations or constraints on generalising the findings

If no source exists, explicitly mark the claim as hypothesis.

Red Flags in AI Plant Advice

Watch for these warning signs that suggest unreliable information:

  • Absolute language without qualifiers: "always," "never," "all plants require"
  • No environmental context: Generic advice that could theoretically apply to any plant anywhere
  • Missing source attribution for specific numerical claims
  • Recommendations contradicting extension service guidelines without explanation
  • Failure to account for light levels in fertiliser advice, watering advice, or pretty much anything else
  • Single-source reasoning presented as settled science, especially from a personal or influencer-based blog, video transcript, etc.
  • Advice that ignores changes in photoperiod and environmental conditions
  • Vague species naming using only common names like "palm," "fern," or "snake plant"

Pro Tip: Google AI Overviews deserve the same scepticism as any other AI-generated content. The fact that they appear at the top of Google search results doesn't mean they're more accurate; it just means they're more visible. Always click through to the actual sources and verify claims independently, even when Google's AI presents them as authoritative summaries.

The Path Forward

  • Treat AI as a research assistant, not an oracle (verify critical recommendations independently, whether they come from ChatGPT, Google AI Overviews, or any other AI system)
  • Don't conflate search ranking with accuracy (the top results in Google search are optimised for popularity and SEO, not scientific validity. Extension service bulletins often rank lower than blog posts but contain far better information)
  • Provide comprehensive context in every query to reduce hallucination risk
  • Demand sources and check that they're real, relevant, and correctly interpreted (click through AI Overview citations to verify they actually support the claims being made)
  • Start with extension services for baseline guidance, then use AI to refine and customise
  • Document and learn from your own observations (your specific conditions matter more than averaged advice)

Conclusion

AI has genuine potential as a horticultural tool for everyone, but only when we recognise its current limitations and actively work with and around them. The training data problem is real: models learn from what's abundant online rather than what's scientifically sound. They hallucinate plausible-sounding details. They often transfer lab-based research to our living rooms without accounting for the profound environmental differences. And they amplify whatever myths are most common in their training data.

But none of this means AI is useless. It means AI needs to be managed and constrained: given good sources to work from, provided with specific measurements to reason about, required to show its working and the steps leading to a conclusion, and treated as one tool among many, not the final authority.

Our plants and gardens remain a place where theory meets reality. No language model can replace careful observation, systematic measurement, and the ongoing patient accumulation of knowledge about how plants respond in your specific conditions. My personal journey started in earnest when I got my first plant at about age 5 (a Mexican Hat Plant); I'm 58 now, so that's 53 years of learning, and I'm learning new stuff daily. It never ends. AI should accelerate the learning process, not become a substitute for it.

Start with measurements, anchor to validated sources, test changes incrementally, and log what actually happens. Use AI to organise and analyse that information, but let the plants themselves, and your own rigorous research and passion for plants, remain your primary teachers.

The best (unlikely) gardener is still the one who observes their plants closely, measures rather than guesses, and treats confident-sounding advice with appropriate scepticism, whether it comes from a chatbot, yours truly, or another blog post like this one.

Remember, the more you know, the more you grow. Your plants too.

The Unlikely Gardener, aka Kyle Bailey
Kyle Bailey is the founder of UnlikelyGardener.com, where science meets soil. He also runs the wildly popular Facebook community Plant Hoarders Anonymous (PHA), home to ~311,000 plant lovers sharing real talk and real results. When Kyle’s not knee-deep in horticultural research or myth-busting bad plant advice, he’s leading two marketing agencies— City Sidewalk Marketing, which supports local small businesses, and Blue Square Marketing, focused on the skilled trades. He’s also a proud dad, grandfather (affectionately referred to as Grumpy), and a dog daddy to three pit bull mix rescues—including one 165-pound lap dog who hasn’t gotten the memo.
