Build a ChatGPT App in Flutter Using the OpenAI API
ChatGPT (Generative Pre-trained Transformer) is a chatbot launched by OpenAI in November 2022. It is built on top of OpenAI’s GPT-3.5 family of large language models and is fine-tuned with both supervised and reinforcement learning techniques.
ChatGPT was launched as a prototype on November 30, 2022 and quickly gained attention for its detailed, articulate answers across many domains of knowledge, although its uneven factual accuracy was identified as a significant drawback.
In this article, we’ll learn how to use the OpenAI API to build a ChatGPT application on Flutter.
To build this application, we'll need the following (we'll install the packages right after creating the project):
- API token: We will need an API token from OpenAI. You can get yours from the OpenAI account dashboard; if you don't have an account, you can create one.
- http: The http package, for making HTTP requests.
- provider: An easy-to-use state management package that is essentially a wrapper around InheritedWidget. We'll use it to manage and share the chat data across the app.
- animated_text_kit: A Flutter package containing a collection of cool text animations.
- flutter_svg: An SVG rendering and widget library for Flutter that allows painting and displaying Scalable Vector Graphics files.
With all things set, let's start building. 🍾 🍻
Open your terminal and create your Flutter app using the Flutter CLI:
flutter create openai_chat
When the app has been created, open the folder in VS Code or any text editor of your choice.
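Next, pull in the four packages listed earlier. A minimal way to do that is with flutter pub add, run from inside the project folder (you can also add the dependencies to pubspec.yaml by hand):
flutter pub add http
flutter pub add provider
flutter pub add animated_text_kit
flutter pub add flutter_svg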
Open the lib folder, open main.dart, and clear out the boilerplate code that was generated with the app, because we are going to build our app from the ground up.
After converting MyApp to a StatefulWidget, your main.dart file will look like this:
import 'package:flutter/material.dart';
void main() {
WidgetsFlutterBinding.ensureInitialized();
runApp(const MyApp());
}
class MyApp extends StatefulWidget {
const MyApp({super.key});
@override
State<MyApp> createState() => _MyAppState();
}
class _MyAppState extends State<MyApp> {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: "Open AI Chat",
home: SafeArea(
bottom: true,
top: false,
child: Scaffold(
backgroundColor: const Color(0xff343541),
appBar: AppBar(
backgroundColor: const Color(0xff343541),
leading: IconButton(
onPressed: () {},
icon: const Icon(
Icons.menu,
color: Color(0xffd1d5db),
),
),
elevation: 0,
title: const Text("New Chat"),
centerTitle: true,
actions: [
IconButton(
onPressed: () {},
icon: const Icon(
Icons.add,
color: Color(0xffd1d5db),
),
),
],
),
body: Stack(
children: const [],
),
),
),
);
}
}
So, now that we have our app set up, we can start building the different widgets. We are going to have four of them:
- User Input Widget
- User Message Widget
- AI Message Widget
- Loader Widget
Create a folder called widgets; it will contain the four widgets we will work on next.
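The widgets also reference two image assets, images/avatar.png for the user and images/ai-avatar.svg for the AI. Add your own images under an images folder at the project root and register the folder in pubspec.yaml, for example:
flutter:
  assets:
    - images/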
User Input Widget
import 'package:flutter/material.dart';
class UserInput extends StatelessWidget {
final TextEditingController chatcontroller;
const UserInput({
Key? key,
required this.chatcontroller,
}) : super(key: key);
@override
Widget build(BuildContext context) {
return Align(
alignment: Alignment.bottomCenter,
child: Container(
padding: const EdgeInsets.only(
top: 10,
bottom: 10,
left: 5,
right: 5,
),
decoration: const BoxDecoration(
color: Color(0xff444654),
border: Border(
top: BorderSide(
color: Color(0xffd1d5db),
width: 0.5,
),
),
),
child: Row(
children: [
Expanded(
flex: 1,
child: Image.asset(
"images/avatar.png",
height: 40,
),
),
Expanded(
flex: 5,
child: TextFormField(
onFieldSubmitted: (e) {
},
controller: chatcontroller,
style: const TextStyle(
color: Colors.white,
),
decoration: const InputDecoration(
focusColor: Colors.white,
filled: true,
fillColor: Color(0xff343541),
suffixIcon: Icon(
Icons.send,
color: Color(0xffacacbe),
),
focusedBorder: OutlineInputBorder(
borderSide: BorderSide.none,
borderRadius: BorderRadius.all(
Radius.circular(5.0),
),
),
border: OutlineInputBorder(
borderRadius: BorderRadius.all(
Radius.circular(5.0),
),
),
),
),
),
],
),
),
);
}
}
The UserInput widget accepts one parameter, the chatcontroller. It also has an onFieldSubmitted callback, which we will use later when the user submits their message.
User Message Widget
import 'package:flutter/material.dart';

class UserMessage extends StatelessWidget {
final String text;
const UserMessage({
Key? key,
required this.text,
}) : super(key: key);
@override
Widget build(BuildContext context) {
return Container(
padding: const EdgeInsets.all(8),
child: Row(
mainAxisAlignment: MainAxisAlignment.start,
crossAxisAlignment: CrossAxisAlignment.start,
children: [
Expanded(
flex: 1,
child: Padding(
padding: const EdgeInsets.all(8.0),
child: Image.asset(
"images/avatar.png",
height: 40,
width: 40,
fit: BoxFit.contain,
),
),
),
Expanded(
flex: 5,
child: Padding(
padding: const EdgeInsets.only(
left: 3,
top: 8,
),
child: Text(
text,
style: const TextStyle(
color: Color(0xffd1d5db),
fontSize: 16,
fontWeight: FontWeight.w700,
),
),
),
),
],
),
);
}
}
We pass the user's message as a parameter to the UserMessage class, and the resulting widget is appended to the ListView.
AI Message Widget
import 'package:animated_text_kit/animated_text_kit.dart';
import 'package:flutter/material.dart';
import 'package:flutter_svg/flutter_svg.dart';

class AiMessage extends StatelessWidget {
final String text;
const AiMessage({
Key? key,
required this.text,
}) : super(key: key);
@override
Widget build(BuildContext context) {
return Container(
color: const Color(0xff444654),
padding: const EdgeInsets.all(8),
child: Row(
crossAxisAlignment: CrossAxisAlignment.start,
children: [
Expanded(
flex: 1,
child: Padding(
padding: const EdgeInsets.all(8.0),
child: Container(
color: const Color(0xff0fa37f),
padding: const EdgeInsets.all(3),
child: SvgPicture.asset(
"images/ai-avatar.svg",
height: 30,
width: 30,
fit: BoxFit.contain,
),
),
),
),
Expanded(
flex: 5,
child: AnimatedTextKit(
animatedTexts: [
TypewriterAnimatedText(
text,
textStyle: const TextStyle(
color: Color(0xffd1d5db),
fontSize: 16,
fontWeight: FontWeight.w700,
),
),
],
totalRepeatCount: 1,
),
),
],
),
);
}
}
The AI's response is passed as a parameter to the AiMessage class, and the resulting widget is appended to the ListView.
Using the animated_text_kit package, we animate the response with a typewriter animation.
Loader Widget
import 'package:flutter/material.dart';
import 'package:flutter_svg/flutter_svg.dart';

class Loading extends StatelessWidget {
final String text;
const Loading({
Key? key,
required this.text,
}) : super(key: key);
@override
Widget build(BuildContext context) {
return Container(
color: const Color(0xff444654),
padding: const EdgeInsets.all(8),
child: Row(
crossAxisAlignment: CrossAxisAlignment.start,
children: [
Expanded(
flex: 1,
child: Padding(
padding: const EdgeInsets.all(8.0),
child: Container(
color: const Color(0xff0fa37f),
padding: const EdgeInsets.all(3),
child: SvgPicture.asset(
"images/ai-avatar.svg",
height: 30,
width: 30,
fit: BoxFit.contain,
),
),
),
),
Expanded(
flex: 5,
child: Text(
text,
style: const TextStyle(
color: Color(0xffd1d5db),
fontSize: 16,
fontWeight: FontWeight.w700,
),
),
),
],
),
);
}
}
The Loading widget is shown while we await a response from the API call; when the response arrives, we remove the loader from the list.
App Constants
const endpoint = "https://api.openai.com/v1/";
const aiToken = "sk-------------------------------------";
Create a file called api_constants.dart; it will contain our endpoint and API token. You can get your API token from the OpenAI API keys dashboard.
OpenAI Repository
import 'dart:convert';

import 'package:http/http.dart' as http;
// Also import the api_constants.dart file created above (adjust the path to where you saved it).

class OpenAiRepository {
static var client = http.Client();
static Future<Map<String, dynamic>> sendMessage({required prompt}) async {
try {
var headers = {
'Authorization': 'Bearer $aiToken',
'Content-Type': 'application/json'
};
var request = http.Request('POST', Uri.parse('${endpoint}completions'));
request.body = json.encode({
"model": "text-davinci-003",
"prompt": prompt,
"temperature": 0,
"max_tokens": 2000
});
request.headers.addAll(headers);
http.StreamedResponse response = await request.send();
if (response.statusCode == 200) {
final data = await response.stream.bytesToString();
return json.decode(data);
} else {
return {
"status": false,
"message": "Oops, there was an error",
};
}
} catch (_) {
return {
"status": false,
"message": "Oops, there was an error",
};
}
}
}
Now, let's communicate with the OpenAI API. Create a file called openai_repository.dart in a repository folder. In it we have a class called OpenAiRepository with a static method called sendMessage, which accepts a single parameter, prompt.
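Before wiring up any UI, you can sanity-check the repository directly; a minimal sketch (the prompt text is just an example):
// Somewhere async, e.g. a temporary button handler:
final response = await OpenAiRepository.sendMessage(prompt: "Say this is a test");
if (response['choices'] != null) {
  print(response['choices'][0]['text']); // the completion text
} else {
  print(response['message']); // the error fallback returned by sendMessage
}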
Authentication
The OpenAI API uses API keys for authentication. Visit your API keys page to retrieve the key you'll use in your requests if you have not done so already; then all you have to do is use the aiToken constant defined above.
All API requests should include your API key in an Authorization HTTP header as follows:
Authorization: Bearer YOUR_API_KEY
Making a Request
{
"model": "text-davinci-003",
"prompt": prompt,
"temperature": 0,
"max_tokens": 2000
}
This request asks the Davinci model to complete the text starting with the prompt sent from the user input. The max_tokens parameter sets an upper bound on how many tokens the API will return. Higher temperature values mean the model will take more risks: try 0.9 for more creative applications, and 0 for ones with a well-defined answer.
This returns a Map<String, dynamic> response that looks like this:
{
"id": "cmpl-GERzeJQ4lvqPk8SkZu4XMIuR",
"object": "text_completion",
"created": 1586839808,
"model": "text-davinci:003",
"choices": [
{
"text": "\n\nThis is indeed a test",
"index": 0,
"logprobs": null,
"finish_reason": "length"
}
],
"usage": {
"prompt_tokens": 5,
"completion_tokens": 7,
"total_tokens": 12
}
}
ChatModel
import 'package:flutter/material.dart';
// Also import the UserMessage, AiMessage, and Loading widgets and the OpenAiRepository class created above.

class ChatModel extends ChangeNotifier {
List<Widget> messages = [];
List<Widget> get getMessages => messages;
Future<void> sendChat(String txt) async {
addUserMessage(txt);
Map<String, dynamic> response =
await OpenAiRepository.sendMessage(prompt: txt);
// Fall back to the error message if the API call failed.
String text = response['choices'] != null
? response['choices'][0]['text']
: response['message'];
//remove the last item
messages.removeLast();
messages.add(AiMessage(text: text));
notifyListeners();
}
void addUserMessage(txt) {
messages.add(UserMessage(text: txt));
messages.add(const Loading(text: "..."));
notifyListeners();
}
}
Since we are using provider for state management, we create a class called ChatModel which extends ChangeNotifier. It holds an empty List<Widget> that we use to push in new message widgets, plus a getMessages getter to read them.
We create a method called sendChat which takes the user input and calls addUserMessage, which pushes a widget containing the user's message, along with the Loading widget, onto the messages list.
Next, we send the prompt to the OpenAI repository, which sends back a response; we store the response text in a String variable called text.
Finally, we remove the Loading widget from the list and add the AiMessage widget.
Almost done… 🤞🏽
We have to go back to our UserInput widget and call sendChat when the user submits their message (remember to import the provider package and the ChatModel class in that file, since we use context.read<ChatModel>()). Your code will now look much like this:
TextFormField(
onFieldSubmitted: (e) {
context.read<ChatModel>().sendChat(e);
chatcontroller.clear();
},
Hit it🚀
All we have to do now is edit our main.dart file: declare a TextEditingController called chatcontroller in _MyAppState, wrap the body in a MultiProvider, and your code will look something like this.
body: MultiProvider(
providers: [
ChangeNotifierProvider(create: (_) => ChatModel()),
],
child: Consumer<ChatModel>(builder: (context, model, child) {
List<Widget> messages = model.getMessages;
return Stack(
children: [
//chat
Container(
margin: const EdgeInsets.only(bottom: 80),
child: ListView(
children: [
const Divider(
color: Color(0xffd1d5db),
),
for (int i = 0; i < messages.length; i++) messages[i]
],
),
),
//input
UserInput(
chatcontroller: chatcontroller,
)
],
);
}),
),
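For completeness, here is a minimal sketch of _MyAppState with the chatcontroller field the snippet above relies on (you'll also need imports for provider, your ChatModel, and the UserInput widget in main.dart):
class _MyAppState extends State<MyApp> {
  // Shared with the UserInput widget so it can be cleared after a message is sent.
  final TextEditingController chatcontroller = TextEditingController();

  @override
  void dispose() {
    chatcontroller.dispose();
    super.dispose();
  }

  // build() stays as shown earlier, with the body wrapped in MultiProvider.
}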
App Running 🛸🚁
All done! You can now use ChatGPT in your Flutter app. You can also clone the repo right here 👉🏾 https://github.com/bensonarafat/openai-chat. Have any questions? Drop a comment and I will respond as soon as possible.