Best Flutter Packages for AI Integration (OpenAI, Gemini, TensorFlow Lite)
You want to add AI features to your Flutter app - chatbots, image generation, text analysis, or on-device machine learning. But where do you start? Which packages should you use? How do you integrate OpenAI, Google Gemini, or TensorFlow Lite?
AI integration in Flutter is a growing trend in 2026. More and more apps are adding AI features, and developers need reliable packages to integrate these capabilities. The good news: Flutter has excellent packages for all major AI services.
Quick Answer: Use openai_dart for OpenAI (ChatGPT, DALL-E), google_generative_ai for Google Gemini, tflite_flutter for TensorFlow Lite (on-device ML), and http for custom API integrations. For chatbots, use flutter_chat_ui with AI APIs. For image generation, use cached_network_image with DALL-E or Stable Diffusion APIs.
This comprehensive guide covers the best Flutter packages for AI integration in 2026, with practical code examples and implementation guides.
Overview: AI in Flutter
Types of AI Integration
- Cloud AI Services - OpenAI, Google Gemini, Anthropic Claude
- On-Device ML - TensorFlow Lite, Core ML
- Computer Vision - Image recognition, object detection
- Natural Language Processing - Text analysis, sentiment analysis
- Speech Recognition - Voice input, transcription
Popular Use Cases
- Chatbots - Customer support, virtual assistants
- Image Generation - DALL-E, Stable Diffusion, Midjourney
- Text Analysis - Sentiment analysis, summarization
- Translation - Real-time translation
- Voice Assistants - Voice commands, transcription
- Image Recognition - Object detection, face recognition
- Recommendation Systems - Personalized recommendations
1. OpenAI Integration (ChatGPT, DALL-E, Whisper)
Package: openai_dart
Pub.dev: https://pub.dev/packages/openai_dart
Stars: 1,000+
Maintenance: Active
Documentation: Excellent
Features:
- ✅ ChatGPT (GPT-4, GPT-3.5)
- ✅ DALL-E (image generation)
- ✅ Whisper (speech-to-text)
- ✅ Embeddings
- ✅ Fine-tuning support
- ✅ Stream responses
Installation
```yaml
# pubspec.yaml
dependencies:
  openai_dart: ^0.3.0
```
Code Example: ChatGPT Integration
```dart
import 'package:openai_dart/openai_dart.dart';

class OpenAIService {
  late final OpenAIClient _client;

  OpenAIService() {
    _client = OpenAIClient(
      apiKey: 'your-api-key-here',
    );
  }

  Future<String> chatCompletion(String prompt) async {
    try {
      final response = await _client.chat.create(
        model: 'gpt-4',
        messages: [
          ChatMessage(
            role: ChatRole.user,
            content: prompt,
          ),
        ],
      );
      return response.choices.first.message.content ?? '';
    } catch (e) {
      print('Error: $e');
      return 'Error: $e';
    }
  }

  Stream<String> chatStream(String prompt) async* {
    try {
      final stream = _client.chat.createStream(
        model: 'gpt-4',
        messages: [
          ChatMessage(
            role: ChatRole.user,
            content: prompt,
          ),
        ],
      );
      await for (final chunk in stream) {
        final content = chunk.choices.first.delta.content;
        if (content != null) {
          yield content;
        }
      }
    } catch (e) {
      print('Error: $e');
    }
  }
}
```
```dart
// Usage in widget
import 'package:flutter/material.dart';

class ChatWidget extends StatefulWidget {
  @override
  _ChatWidgetState createState() => _ChatWidgetState();
}

class _ChatWidgetState extends State<ChatWidget> {
  final _openAI = OpenAIService();
  final _messages = <ChatMessage>[];
  final _controller = TextEditingController();
  bool _loading = false;

  Future<void> _sendMessage() async {
    final userMessage = _controller.text;
    if (userMessage.isEmpty) return;
    setState(() {
      _messages.add(ChatMessage(role: ChatRole.user, content: userMessage));
      _loading = true;
      _controller.clear();
    });
    final response = await _openAI.chatCompletion(userMessage);
    setState(() {
      _messages.add(ChatMessage(role: ChatRole.assistant, content: response));
      _loading = false;
    });
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: Text('ChatGPT')),
      body: Column(
        children: [
          Expanded(
            child: ListView.builder(
              itemCount: _messages.length,
              itemBuilder: (context, index) {
                final message = _messages[index];
                return ListTile(
                  title: Text(message.content),
                  subtitle: Text(message.role.name),
                );
              },
            ),
          ),
          if (_loading) LinearProgressIndicator(),
          Padding(
            padding: EdgeInsets.all(8.0),
            child: Row(
              children: [
                Expanded(
                  child: TextField(
                    controller: _controller,
                    decoration: InputDecoration(hintText: 'Type a message...'),
                  ),
                ),
                IconButton(
                  icon: Icon(Icons.send),
                  onPressed: _sendMessage,
                ),
              ],
            ),
          ),
        ],
      ),
    );
  }

  @override
  void dispose() {
    _controller.dispose();
    super.dispose();
  }
}
```
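The `chatStream` method above emits the reply as incremental deltas that the UI appends as they arrive. The accumulation pattern is easy to sketch outside Flutter — here in plain Python, with a simulated generator standing in for the API stream (`stream_chunks` and its values are made up for illustration):

```python
# Simulates consuming a streamed chat response: each chunk carries a small
# text delta (or None), and the client accumulates them into the full reply.

def stream_chunks():
    """Stand-in for the API stream; yields deltas like a streaming endpoint."""
    for delta in ["Hel", "lo ", None, "world", "!"]:
        yield delta

def accumulate(chunks):
    """Append each non-empty delta, as the Dart `await for` loop does."""
    reply = ""
    for delta in chunks:
        if delta is not None:
            reply += delta
    return reply

if __name__ == "__main__":
    print(accumulate(stream_chunks()))
```

In the Flutter widget you would do the same accumulation inside `setState`, so the message bubble grows as chunks arrive instead of waiting for the full completion.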
Code Example: DALL-E Image Generation
```dart
class DALLEService {
  late final OpenAIClient _client;

  DALLEService() {
    _client = OpenAIClient(apiKey: 'your-api-key-here');
  }

  Future<String?> generateImage(String prompt) async {
    try {
      final response = await _client.images.create(
        prompt: prompt,
        n: 1,
        size: ImageSize.size1024x1024,
      );
      return response.data.first.url;
    } catch (e) {
      print('Error: $e');
      return null;
    }
  }
}
```
```dart
// Usage
import 'package:flutter/material.dart';

class ImageGeneratorWidget extends StatefulWidget {
  @override
  _ImageGeneratorWidgetState createState() => _ImageGeneratorWidgetState();
}

class _ImageGeneratorWidgetState extends State<ImageGeneratorWidget> {
  final _dalle = DALLEService();
  final _promptController = TextEditingController();
  String? _generatedImageUrl;
  bool _loading = false;

  Future<void> _generateImage() async {
    final prompt = _promptController.text;
    if (prompt.isEmpty) return;
    setState(() {
      _loading = true;
      _generatedImageUrl = null;
    });
    final imageUrl = await _dalle.generateImage(prompt);
    setState(() {
      _generatedImageUrl = imageUrl;
      _loading = false;
    });
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: Text('DALL-E Image Generator')),
      body: Column(
        children: [
          Padding(
            padding: EdgeInsets.all(16.0),
            child: TextField(
              controller: _promptController,
              decoration: InputDecoration(
                hintText: 'Describe the image you want...',
                border: OutlineInputBorder(),
              ),
            ),
          ),
          ElevatedButton(
            onPressed: _loading ? null : _generateImage,
            child: _loading
                ? CircularProgressIndicator()
                : Text('Generate Image'),
          ),
          if (_generatedImageUrl != null)
            Expanded(
              child: Image.network(_generatedImageUrl!),
            ),
        ],
      ),
    );
  }

  @override
  void dispose() {
    _promptController.dispose();
    super.dispose();
  }
}
```
Pros and Cons
Pros:
- Comprehensive OpenAI API support
- Stream responses for real-time chat
- Type-safe API
- Good documentation
- Active maintenance
Cons:
- Requires API key (costs money)
- Rate limits apply
- Internet connection required
2. Google Gemini Integration
Package: google_generative_ai
Pub.dev: https://pub.dev/packages/google_generative_ai
Stars: 500+
Maintenance: Active (Official Google package)
Documentation: Excellent
Features:
- ✅ Gemini Pro models
- ✅ Text generation
- ✅ Image analysis (Gemini Vision)
- ✅ Multimodal prompts
- ✅ Streaming support
- ✅ Free tier available
Installation
```yaml
dependencies:
  google_generative_ai: ^0.2.0
```
Code Example: Gemini Chat
```dart
import 'dart:io';

import 'package:google_generative_ai/google_generative_ai.dart';

class GeminiService {
  late final GenerativeModel _model;

  GeminiService() {
    _model = GenerativeModel(
      model: 'gemini-pro',
      apiKey: 'your-api-key-here',
    );
  }

  Future<String> generateText(String prompt) async {
    try {
      final content = [Content.text(prompt)];
      final response = await _model.generateContent(content);
      return response.text ?? 'No response';
    } catch (e) {
      print('Error: $e');
      return 'Error: $e';
    }
  }

  Stream<String> generateStream(String prompt) async* {
    try {
      final content = [Content.text(prompt)];
      final response = _model.generateContentStream(content);
      await for (final chunk in response) {
        final text = chunk.text;
        if (text != null) {
          yield text;
        }
      }
    } catch (e) {
      print('Error: $e');
    }
  }

  Future<String> analyzeImage(String imagePath, String prompt) async {
    try {
      final imageBytes = await File(imagePath).readAsBytes();
      final content = [
        Content.multi([
          TextPart(prompt),
          DataPart('image/jpeg', imageBytes),
        ])
      ];
      // Image prompts require a vision-capable Gemini model.
      final response = await _model.generateContent(content);
      return response.text ?? 'No response';
    } catch (e) {
      print('Error: $e');
      return 'Error: $e';
    }
  }
}
```
Pros and Cons
Pros:
- Official Google package
- Free tier available
- Multimodal support (text + images)
- Good performance
- Active development
Cons:
- Newer than OpenAI (smaller community)
- Limited model options compared to OpenAI
- API key required
3. TensorFlow Lite (On-Device ML)
Package: tflite_flutter
Pub.dev: https://pub.dev/packages/tflite_flutter
Stars: 800+
Maintenance: Active
Documentation: Good
Features:
- ✅ On-device inference
- ✅ Image classification
- ✅ Object detection
- ✅ Text classification
- ✅ Custom models
- ✅ No internet required
Installation
```yaml
dependencies:
  tflite_flutter: ^0.10.0
```
Code Example: Image Classification
```dart
import 'dart:io';
import 'dart:typed_data';

import 'package:flutter/services.dart' show rootBundle;
import 'package:image/image.dart' as img;
import 'package:tflite_flutter/tflite_flutter.dart';

class TensorFlowLiteService {
  Interpreter? _interpreter;
  List<String>? _labels;

  Future<void> loadModel() async {
    try {
      _interpreter = await Interpreter.fromAsset('model.tflite');
      _labels = await _loadLabels();
    } catch (e) {
      print('Error loading model: $e');
    }
  }

  Future<List<String>> _loadLabels() async {
    final labelData = await rootBundle.loadString('assets/labels.txt');
    return labelData.split('\n');
  }

  Future<String> classifyImage(String imagePath) async {
    if (_interpreter == null) {
      await loadModel();
    }
    try {
      // Load and resize the image to the model's 224x224 input size.
      final imageBytes = await File(imagePath).readAsBytes();
      final image = img.decodeImage(imageBytes);
      final resized = img.copyResize(image!, width: 224, height: 224);
      // Convert to the model's input tensor format.
      final input = _imageToInputTensor(resized);
      // Run inference (assumes a 1000-class classifier such as MobileNet).
      final output = List.filled(1000, 0.0).reshape([1, 1000]);
      _interpreter!.run(input, output);
      // Pick the label with the highest score.
      final index =
          output[0].indexOf(output[0].reduce((a, b) => a > b ? a : b));
      return _labels![index];
    } catch (e) {
      print('Error: $e');
      return 'Error';
    }
  }

  // Normalizes 8-bit RGB values to [0, 1] floats in a [1, 224, 224, 3]
  // tensor. (Uses the image package ^3.x pixel accessors; v4 exposes
  // pixel.r / pixel.g / pixel.b instead.)
  List<dynamic> _imageToInputTensor(img.Image image) {
    final buffer = Float32List(1 * 224 * 224 * 3);
    int pixelIndex = 0;
    for (int i = 0; i < 224; i++) {
      for (int j = 0; j < 224; j++) {
        final pixel = image.getPixel(j, i);
        buffer[pixelIndex++] = img.getRed(pixel) / 255.0;
        buffer[pixelIndex++] = img.getGreen(pixel) / 255.0;
        buffer[pixelIndex++] = img.getBlue(pixel) / 255.0;
      }
    }
    return buffer.reshape([1, 224, 224, 3]);
  }
}
```
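The two steps that most often go wrong here are the input normalization and the final argmax over the output scores. Stripped of the Flutter plumbing, the logic is just this (a plain-Python sketch with a hypothetical 4-class output, not Flutter code):

```python
# Sketch of the TFLite pre/post-processing above: normalize 8-bit RGB
# channel values to [0, 1], then take the argmax of the class scores.

def normalize_pixels(rgb_bytes):
    """Scale 0-255 channel values to 0.0-1.0, as the Dart helper does."""
    return [b / 255.0 for b in rgb_bytes]

def top_label(scores, labels):
    """Return the label whose score is highest (the argmax step)."""
    best_index = max(range(len(scores)), key=lambda i: scores[i])
    return labels[best_index]

if __name__ == "__main__":
    print(normalize_pixels([0, 128, 255]))
    print(top_label([0.1, 0.7, 0.15, 0.05], ["cat", "dog", "car", "tree"]))
```

Note that some TFLite models expect a different normalization (e.g. `(b - 127.5) / 127.5` for inputs in [-1, 1]); check how your model was trained before copying the divide-by-255 convention.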
Pros and Cons
Pros:
- On-device inference (privacy)
- No internet required
- Fast inference
- Free to use
- Custom models supported
Cons:
- Requires model training/knowledge
- Model size limitations
- Setup complexity
- Limited to model capabilities
4. Speech Recognition
Package: speech_to_text
Pub.dev: https://pub.dev/packages/speech_to_text
Stars: 1,500+
Maintenance: Active
Documentation: Good
Features:
- ✅ Speech-to-text conversion
- ✅ Multiple languages
- ✅ Real-time transcription
- ✅ On-device processing
- ✅ Permission handling
Installation
```yaml
dependencies:
  speech_to_text: ^6.0.0
```
Code Example: Voice Input
```dart
import 'package:speech_to_text/speech_to_text.dart';

class SpeechRecognitionService {
  final SpeechToText _speech = SpeechToText();
  bool _isAvailable = false;

  Future<bool> initialize() async {
    _isAvailable = await _speech.initialize();
    return _isAvailable;
  }

  void startListening(Function(String) onResult) {
    if (!_isAvailable) return;
    _speech.listen(
      onResult: (result) {
        if (result.finalResult) {
          onResult(result.recognizedWords);
        }
      },
    );
  }

  void stopListening() {
    _speech.stop();
  }

  bool get isListening => _speech.isListening;
}
```
5. Text-to-Speech
Package: flutter_tts
Pub.dev: https://pub.dev/packages/flutter_tts
Stars: 600+
Maintenance: Active
Documentation: Good
Features:
- ✅ Text-to-speech conversion
- ✅ Multiple languages
- ✅ Voice selection
- ✅ Speed and pitch control
- ✅ On-device processing
Installation
```yaml
dependencies:
  flutter_tts: ^4.0.0
```
Code Example: Text-to-Speech
```dart
import 'package:flutter_tts/flutter_tts.dart';

class TextToSpeechService {
  late FlutterTts _tts;

  TextToSpeechService() {
    _tts = FlutterTts();
    _initialize();
  }

  Future<void> _initialize() async {
    await _tts.setLanguage('en-US');
    await _tts.setSpeechRate(0.5);
    await _tts.setVolume(1.0);
    await _tts.setPitch(1.0);
  }

  Future<void> speak(String text) async {
    await _tts.speak(text);
  }

  Future<void> stop() async {
    await _tts.stop();
  }

  Future<List<dynamic>> getVoices() async {
    return await _tts.getVoices;
  }
}
```
6. Image Recognition (ML Kit)
Package: google_mlkit_* (various packages)
Packages:
- `google_mlkit_face_detection`
- `google_mlkit_text_recognition`
- `google_mlkit_barcode_scanning`
- `google_mlkit_image_labeling`
- `google_mlkit_object_detection`
Features:
- ✅ Face detection
- ✅ Text recognition (OCR)
- ✅ Barcode scanning
- ✅ Image labeling
- ✅ Object detection
- ✅ On-device processing
Installation
```yaml
dependencies:
  google_mlkit_text_recognition: ^0.11.0
  google_mlkit_face_detection: ^0.10.0
```
Code Example: Text Recognition (OCR)
```dart
import 'package:google_mlkit_text_recognition/google_mlkit_text_recognition.dart';

class OCRService {
  final TextRecognizer _recognizer = TextRecognizer();

  Future<String> recognizeText(String imagePath) async {
    try {
      final inputImage = InputImage.fromFilePath(imagePath);
      final recognizedText = await _recognizer.processImage(inputImage);
      return recognizedText.text;
    } catch (e) {
      print('Error: $e');
      return 'Error';
    }
  }

  // Call once when you are done with the service. Closing the recognizer
  // inside recognizeText (e.g. in a finally block) would make every
  // subsequent call fail.
  void dispose() {
    _recognizer.close();
  }
}
```
7. Chat UI for AI Apps
Package: flutter_chat_ui
Pub.dev: https://pub.dev/packages/flutter_chat_ui
Stars: 1,000+
Maintenance: Active
Documentation: Good
Features:
- ✅ Beautiful chat UI
- ✅ Message bubbles
- ✅ Typing indicators
- ✅ File attachments
- ✅ Customizable
Installation
```yaml
dependencies:
  flutter_chat_ui: ^1.6.0
```
Code Example: AI Chat UI
```dart
import 'package:flutter/material.dart';
import 'package:flutter_chat_types/flutter_chat_types.dart' as types;
import 'package:flutter_chat_ui/flutter_chat_ui.dart';

class AIChatWidget extends StatefulWidget {
  @override
  _AIChatWidgetState createState() => _AIChatWidgetState();
}

class _AIChatWidgetState extends State<AIChatWidget> {
  final List<types.Message> _messages = [];
  final _user = types.User(id: 'user');
  final _openAI = OpenAIService(); // from the OpenAI section above

  void _handleSendPressed(types.PartialText message) {
    final textMessage = types.TextMessage(
      author: _user,
      createdAt: DateTime.now().millisecondsSinceEpoch,
      id: DateTime.now().millisecondsSinceEpoch.toString(),
      text: message.text,
    );
    setState(() {
      _messages.insert(0, textMessage);
    });
    // Get the AI response asynchronously and insert it when it arrives.
    _openAI.chatCompletion(message.text).then((response) {
      final aiMessage = types.TextMessage(
        author: types.User(id: 'ai'),
        createdAt: DateTime.now().millisecondsSinceEpoch,
        id: DateTime.now().millisecondsSinceEpoch.toString(),
        text: response,
      );
      setState(() {
        _messages.insert(0, aiMessage);
      });
    });
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: Text('AI Chat')),
      body: Chat(
        messages: _messages,
        onSendPressed: _handleSendPressed,
        user: _user,
      ),
    );
  }
}
```
Complete Integration Example
AI-Powered Chat App
```dart
import 'package:flutter/material.dart';
import 'package:openai_dart/openai_dart.dart';
import 'package:flutter_chat_ui/flutter_chat_ui.dart';
import 'package:flutter_chat_types/flutter_chat_types.dart' as types;

class AIChatApp extends StatefulWidget {
  @override
  _AIChatAppState createState() => _AIChatAppState();
}

class _AIChatAppState extends State<AIChatApp> {
  final OpenAIService _openAI = OpenAIService();
  final List<types.Message> _messages = [];
  final _user = types.User(id: 'user');
  bool _loading = false;

  Future<void> _sendMessage(String text) async {
    final userMessage = types.TextMessage(
      author: _user,
      createdAt: DateTime.now().millisecondsSinceEpoch,
      id: DateTime.now().millisecondsSinceEpoch.toString(),
      text: text,
    );
    setState(() {
      _messages.insert(0, userMessage);
      _loading = true;
    });
    final response = await _openAI.chatCompletion(text);
    final aiMessage = types.TextMessage(
      author: types.User(id: 'ai'),
      createdAt: DateTime.now().millisecondsSinceEpoch,
      id: DateTime.now().millisecondsSinceEpoch.toString(),
      text: response,
    );
    setState(() {
      _messages.insert(0, aiMessage);
      _loading = false;
    });
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: Text('AI Chat Assistant')),
      body: Chat(
        messages: _messages,
        onSendPressed: (message) => _sendMessage(message.text),
        user: _user,
        showUserAvatars: true,
        showUserNames: true,
        inputOptions: InputOptions(
          sendButtonVisibilityMode: SendButtonVisibilityMode.always,
        ),
      ),
    );
  }
}
```
Best Practices
1. API Key Management
Never commit API keys to version control:
```dart
// Note: Platform.environment does not carry your shell variables into a
// mobile app, so prefer compile-time defines or a bundled .env file.

// ✅ Good - compile-time define, passed with:
//   flutter run --dart-define=OPENAI_API_KEY=sk-...
const apiKey = String.fromEnvironment('OPENAI_API_KEY');

// ✅ Good - flutter_dotenv (add .env to .gitignore)
import 'package:flutter_dotenv/flutter_dotenv.dart';
final dotenvApiKey = dotenv.env['OPENAI_API_KEY'] ?? '';
```
2. Error Handling
```dart
// ApiException and NetworkException are placeholders - catch whatever typed
// exceptions your AI client actually throws.
Future<String> safeAICall(Future<String> Function() call) async {
  try {
    return await call();
  } on ApiException catch (e) {
    return 'API Error: ${e.message}';
  } on NetworkException catch (e) {
    return 'Network Error: ${e.message}';
  } catch (e) {
    return 'Unexpected Error: $e';
  }
}
```
3. Rate Limiting
```dart
class RateLimitedService {
  final _requests = <DateTime>[];
  final _maxRequests = 60;
  final _timeWindow = Duration(minutes: 1);

  Future<bool> canMakeRequest() async {
    final now = DateTime.now();
    _requests.removeWhere((time) => now.difference(time) > _timeWindow);
    if (_requests.length >= _maxRequests) {
      return false;
    }
    _requests.add(now);
    return true;
  }
}
```
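The sliding-window idea above — drop timestamps that fell out of the window, then check the count — is worth verifying against its edge cases. A minimal sketch in plain Python, with an injected clock instead of `DateTime.now()` so the behavior is deterministic:

```python
# Sliding-window rate limiter: same logic as the Dart class above, but with
# an explicit `now` argument so the window behavior is easy to verify.

class SlidingWindowLimiter:
    def __init__(self, max_requests=60, window_seconds=60):
        self.max_requests = max_requests
        self.window = window_seconds
        self.requests = []  # timestamps of previously accepted requests

    def can_make_request(self, now):
        # Drop timestamps that are older than the window.
        self.requests = [t for t in self.requests if now - t <= self.window]
        if len(self.requests) >= self.max_requests:
            return False
        self.requests.append(now)
        return True

if __name__ == "__main__":
    limiter = SlidingWindowLimiter(max_requests=2, window_seconds=60)
    print(limiter.can_make_request(0))    # accepted
    print(limiter.can_make_request(1))    # accepted
    print(limiter.can_make_request(2))    # rejected: window is full
    print(limiter.can_make_request(120))  # accepted: old entries expired
```

Client-side limiting like this only smooths your own traffic; you still need to handle HTTP 429 responses from the API, ideally with exponential backoff.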
4. Caching Responses
```dart
import 'dart:convert';
import 'dart:typed_data';

import 'package:flutter_cache_manager/flutter_cache_manager.dart';

class CachedAIService {
  final _cache = DefaultCacheManager();

  Future<String> getCachedResponse(String prompt) async {
    final cacheKey = _hashPrompt(prompt);
    final file = await _cache.getFileFromCache(cacheKey);
    if (file != null) {
      return await file.file.readAsString();
    }
    final response = await _makeAPIRequest(prompt); // your AI call goes here
    await _cache.putFile(cacheKey, Uint8List.fromList(utf8.encode(response)));
    return response;
  }

  // Caution: String.hashCode is not guaranteed to be stable across runs or
  // platforms. A content hash (e.g. sha256 from package:crypto) is a safer
  // cache key.
  String _hashPrompt(String prompt) {
    return prompt.hashCode.toString();
  }
}
```
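One subtlety in the example above: keys derived from an object's hash code are not guaranteed to be stable between app runs, so a persistent cache can silently stop hitting. A cryptographic digest of the prompt text is deterministic everywhere; the idea, sketched in plain Python:

```python
# Stable cache keys for prompts: a cryptographic digest of the prompt text
# is identical across runs and platforms, unlike object hash codes.

import hashlib

def cache_key(prompt):
    """Derive a filesystem-safe, deterministic key from the prompt."""
    return hashlib.sha256(prompt.encode("utf-8")).hexdigest()

if __name__ == "__main__":
    print(cache_key("What is Flutter?"))
    # Same prompt -> same key, so it is safe to use as a cache-file name.
    print(cache_key("What is Flutter?") == cache_key("What is Flutter?"))
```

In Dart, the `crypto` package's `sha256` gives you the same digest (`sha256.convert(utf8.encode(prompt)).toString()`).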
Package Comparison Summary
| Package | Use Case | Cost | Internet Required | Setup Complexity |
|---|---|---|---|---|
| openai_dart | ChatGPT, DALL-E | Pay-per-use | Yes | Low |
| google_generative_ai | Gemini, Multimodal | Free tier + paid | Yes | Low |
| tflite_flutter | On-device ML | Free | No | High |
| speech_to_text | Voice input | Free | No | Medium |
| flutter_tts | Text-to-speech | Free | No | Low |
| google_mlkit_* | Image/Text recognition | Free | No | Medium |
Conclusion
Flutter has excellent packages for AI integration in 2026. Here are the top recommendations:
For Chatbots:
- Use `openai_dart` or `google_generative_ai` for AI responses
- Use `flutter_chat_ui` for a beautiful chat UI
- Use `speech_to_text` for voice input
- Use `flutter_tts` for voice responses

For Image Generation:
- Use `openai_dart` for DALL-E integration
- Use `cached_network_image` for image caching
- Use `image_picker` for user images

For On-Device ML:
- Use `tflite_flutter` for custom models
- Use `google_mlkit_*` for pre-built models
- Use the `camera` package for real-time processing
Best Practices:
- Secure API keys (use environment variables)
- Handle errors gracefully
- Implement rate limiting
- Cache responses when possible
- Consider costs (cloud APIs cost money)
- Use on-device ML for privacy-sensitive features
Start with simple integrations and gradually add more advanced features. The Flutter AI ecosystem is growing rapidly, and new packages are being added regularly.
Next Steps
- Choose your AI service (OpenAI, Gemini, or on-device)
- Install the appropriate package
- Set up API keys securely
- Start with a simple integration
- Add error handling and caching
- Build your AI-powered Flutter app!
Updated for Flutter 3.24+, 2026 AI packages, and latest API versions