The following is a recording and full transcript from the webinar, “Migrating Existing Applications to AWS Without Reengineering”. You can download the full slide deck on Slideshare.
Designing a cloud data system architecture that protects your precious data when operating business-critical applications and workloads in the cloud is of paramount importance to cloud architects today.
Ensuring high availability for your company’s applications and protecting business data is challenging, and somewhat different than in traditional on-premises data centers. For most companies with hundreds to thousands of applications, it’s impractical to build all of these important capabilities into every application’s design architecture.
The cloud storage infrastructure typically only provides a subset of what’s required to properly protect business data and applications. So how do you ensure your business data and applications are architected correctly and protected in the cloud? In this webinar, we covered: Best Practices for protecting business data in the cloud How To design a protected and highly-available cloud system architecture, Lessons Learned from architecting thousands of cloud system architectures
Full Transcript: Migrating Existing Applications to AWS Without Reengineering
Taran Soodan: Good afternoon everyone, and welcome to a SoftNAS webinar, today, on No App left behind – Migrate Your Existing Applications to AWS Without Re-engineering.
Definitely a wordy title, but today’s webinar is going to focus on migrating the existing applications that you have on your on-premises systems to AWS without having to rewrite a single line of code.
Our presenter today will be myself, Taran Soodan, and our senior solutions architect, Kevin Brown. Kevin, go ahead and say hey.
Kevin Brown: Hello! How are you doing? I’m looking forward to presenting today.
Taran: Thank you, Kevin. Before we begin this webinar, we do want to cover a couple of housekeeping items. Number one, the webinar audio is available through your mic and speakers on your desktop or laptop, or you can use your telephone as well. The local numbers are available to you on the GoToWebinar control panel.
Kevin, if you just go back. We will also be dedicating some portion of time at the end to answering any questions that you may have during today’s webinar in the questions pane.
If you have any questions that pop up during the webinar, please feel free to post them in the questions pane here and we’ll go and answer them at the end.
Finally, this webinar is being recorded. For those of you who want to share the slides or the audio with your colleagues, you’ll receive a link later on today. The next slide please, Kevin.
To thank everyone for joining today’s webinar, we are offering a $100 AWS credit and we’ll provide the link for that credit a little bit later on.
For our agenda today, this is going to be a technical discussion. We’ll be talking about security and data concerns for migrating your applications.
We will also talk a bit about the design and architectural considerations around security and access control, performance, backup and data protection, mobility and elasticity, and high-availability.
A spoiler alert – we are going to talk about our SoftNAS product and how it’s able to help you migrate without having to re-engineer your applications.
Finally, we will close off this webinar with the Q&A session. Kevin, I’ll go ahead and turn it over to you.
Kevin: All right, not a problem. Thank you very much, Taran. Hello. My name is Kevin Brown, and I thank you for joining us today from wherever you’ve logged in. Our goal for this webinar is to discuss the best practices for designing cloud storage for existing apps on AWS.
We’ve collected a series of questions that we get asked regularly as we work with our customers and potential customers as they go through the decision of migrating their existing environments to the cloud. We hope that you’ll find this informative as you go through your decision-making and your design processes.
First, let’s actually talk about SoftNAS a little bit. We’re just going to give our story and why do we have this information and why do we want to share this information.
SoftNAS was born in the cloud in 2012. It was actually born from our founder, Rick Braddy, going out to find a solution that would give him access to cloud storage.
When he went out and looked for one, he couldn’t find anything that was out there. He took it upon himself to go through the process of creating the solution that we now know as SoftNAS.
SoftNAS is a cloud NAS. It’s a virtual storage appliance that exists on-premise or within the AWS Cloud and we have over 2,000 VPC deployments. We focus on no app left behind.
We give you the ability to migrate your apps into the cloud so that you don’t have to change your code at all. It’s a Lift and Shift bringing your applications to be able to address cloud storage very easily and very quickly.
We work with everyone from Fortune 500 to SMB companies and we have thousands of AWS subscribers. SoftNAS also owns several patents, including patents for HA in the cloud and data migration.
The first thing that we want to go through and we want to talk about is cloud strategy. Cloud strategy, what hinders it? What questions do we need to ask? What are we thinking about as we go through the process of moving our data to the cloud?
Every year, IDG, the number one tech media company in the world, polls its customers. You might know them for creating CIO.com, Computerworld, InfoWorld, ITworld, and Network World. Basically, if it has technology and a “world” next to it, it’s probably owned by IDG.
The goal of the poll is to measure cloud computing trends among technology decision-makers: to figure out uses and plans across various cloud services, development models, and investments, and to figure out business strategies and plans for the rest of the IT world to focus on.
From that, we actually took the top five concerns. The top five concerns, believe it or not, have to do with our data. Number one: concerns regarding where my data will be stored.
Is my data going to be stored safely and reliably? Is it going to be stored in a data center? What type of storage is it going to be stored on? How could I figure that out?
Especially when you’re thinking about moving to a public cloud, these are some of the questions that weigh on people’s minds.
Question number two: concerns about the security of cloud computing solutions. Is there a risk of unauthorized access? Is there data integrity protection? These are all concerns that are out there about the security of what I’m going to have in an environment.
Number three, we also have concerns that are out there about vendor lock-in. What happens if my vendor of choice changes costs, changes offerings, or just goes away?
There are concerns out there, number four, with surrounding integration of existing infrastructure and policies. How do I make the information available outside the cloud while preserving the uniform set of access privileges that I have worked the last 20 years to develop?
Number five. You have the concerns about the ability to maintain enterprise or industry standards – that’s your ISO, PCI, SOC, and SOX requirements. What we plan to do in the upcoming slides is to share some of the questions that our customers have asked, and which they asked early in the design stage of moving their data to the cloud, so that this will be beneficial to you.
Question number one, and it’s based off of the same IDG poll: how do I maintain or extend security and access control into the cloud? We often think from the outside in when we design for threats.
This is how our current on-prem environment is built. It was built with protection from external access. It then goes down to physical access to the infrastructure. Then it’s access to the files and directories. All of these protections need to be considered and extended to your cloud environment, so that’s where the setup of AWS’s infrastructure plays hand-in-hand with SoftNAS.
First, it’s the matter of setting up a firewall, and Amazon already gives you the ability to utilize firewalls through your access controls and your security groups.
Setting up stateful protection from your VPCs, setting up network access protection, and then you go into cloud native security. Access to infrastructure. Who has access to your infrastructure?
By setting up your IAM roles or your IAM users, you have the ability to control that, along with security groups. Then there’s being able to encrypt that data. If everything fails and users get the ability to touch my data, how do I make sure that even when they can see it, it’s encrypted and it’s something that they cannot use?
We also talk about the extension of enterprise authentication schemes. How do I make sure that I am tying into Active Directory or tying into LDAP, which already exist in my environment?
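To make the access-control discussion concrete, here is a minimal sketch, in Python and purely illustrative, of the kind of least-privilege IAM policy document being described. The bucket name, statement ID, and helper function are hypothetical, not part of any SoftNAS or AWS tooling:

```python
import json

def least_privilege_s3_policy(bucket_name):
    """Build a read/write IAM policy scoped to a single S3 bucket.

    Granting only the actions an application needs, on only the
    resources it needs, limits the damage if credentials ever leak.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AppDataAccessOnly",
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": f"arn:aws:s3:::{bucket_name}/*",
            }
        ],
    }

# Hypothetical bucket name for illustration.
policy = least_privilege_s3_policy("example-app-data")
print(json.dumps(policy, indent=2))
```

A policy like this would typically be attached to an IAM role rather than distributed as static secret keys, which matches the "who has access to your infrastructure" framing above.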
The next question that we want to ask is structured around backups and data protection: how to safeguard from data loss or corruption. We get asked this question, I don’t know, probably about 10-15 times a day.
We get asked, “I’m moving everything to the cloud; do I still need a backup?” Yes, you still do need a backup. An effective backup strategy is still needed, even though your data is in the cloud and you have redundancy.
Everything has been enhanced, but you still need to have a point in time, or an extended period of time, that you can actually go back to and grab that data from.
Do I still need antivirus and malware protection? Yes, antivirus and malware protection is still needed. You also need the ability to have snapshots and rollbacks and that’s one of the things that you want to design for as you decide to move your storage to the cloud.
How do I protect against user error or compromise? We live in a world of compromise. A couple of weeks ago, we all saw that Europe ran into the problem of ransomware. Ransomware was basically hitting companies left and right.
From a snapshot and rollback standpoint, you want to be able to have a point in time that you could quickly rollback so that your users will not experience a long period of downtime. You need to design your storage with that in mind.
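As a sketch of the snapshot-and-rollback idea, here is a small, hypothetical Python helper that picks the newest snapshot taken before a known compromise time, in other words the rollback point that minimizes how much work your users lose. The function name and sample timestamps are illustrative:

```python
from datetime import datetime

def latest_clean_snapshot(snapshot_times, compromise_time):
    """Return the most recent snapshot taken strictly before the
    compromise, i.e. the newest known-good rollback point."""
    clean = [t for t in snapshot_times if t < compromise_time]
    return max(clean) if clean else None

# Snapshots every six hours; ransomware detected at 14:30.
snaps = [datetime(2017, 6, 1, h) for h in (0, 6, 12, 18)]
point = latest_clean_snapshot(snaps, datetime(2017, 6, 1, 14, 30))
print(point)  # 2017-06-01 12:00:00
```

The design takeaway is that your snapshot schedule directly bounds your worst-case data loss: with six-hour snapshots, the best clean rollback point can be almost six hours stale.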
We also need to talk about replication. Where am I going to store my data? Where am I going to replicate my data to? How can I protect so that data redundancy that I am used to on my on-prem environment, that I have the ability to bring some of that to the cloud and have data resiliency?
I also need to think about how I protect from data corruption. How do I design my environment to ensure that data integrity is going to be there, that the protocols with the underlying infrastructure are protecting me from the different scenarios that would cause my data to lose its integrity?
Also, you want to think of data efficiency. How can I minimize the amount of data while designing for cost savings? Am I thinking in this scenario about how I dedupe and how I compress that data?
All of these things need to be taken into account as you go through that design process because it’s easier to think about it and ask those questions now than to try to shoehorn or fit it in after you’ve moved your data to the cloud.
The next thing that we need to think about is performance. We get asked this all the time. How do I plan for performance? How do I design for performance, but not just for now? How do I design for performance in the future also?
Yes, we could design a solution that is ideal for our current situation; but if it doesn’t scale with us for the next two years, for the next five years, it’s not beneficial – it gets archaic very quickly.
How do I structure it so that I am designing not just for now but for two years from now, for five years from now, for potentially ten years from now? There are different concerns that need to be taken at this point.
We need to talk about dedicated versus shared infrastructure. Dedicated instances provide the ability for you to tune performance. That’s a big thing because what you do right now and your use-case as it changes, you need to make sure that you could actually tune for performance for that.
Shared infrastructure. Although shared infrastructure can be cost-effective, multi-tenancy means that tuning is programmatic. If I use shared infrastructure and I have to tune for my specific use case, or for multiple use cases within that environment, I have to wait for a programmatic change, because the infrastructure is not dedicated to me solely. It is used by many other users.
Those are different concerns that you need to think about when it actually comes to the point of, am I going to use dedicated or am I going to use a shared-infrastructure.
We also need to think about bursting and bursting limitations. You always design with the ability to burst beyond peak load. That is number one, the 101 of it: I know that my regular load is going to be X, but I need to make sure that I can burst beyond X.
You also need to understand the pros and cons of burst credits. There are different infrastructures and different solutions out there that introduce burst credits.
If burst credits are used, what do I have the ability to burst to? Once that burst credit is done, what does it do? Does it bring me down to subpar or does it bring me back to par?
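The burst-credit behavior alluded to here can be worked through with the figures AWS documented for gp2 volumes around this time: a baseline of 3 IOPS per GiB with a 100 IOPS floor, bursting to 3,000 IOPS from a credit bucket of 5.4 million credits. This Python sketch, with those documented figures baked in as assumptions, shows why a small volume comes back down to a much lower baseline once its credits are spent:

```python
def gp2_baseline_iops(size_gib):
    """gp2 baseline: 3 IOPS per GiB, with a floor of 100 IOPS."""
    return max(100, 3 * size_gib)

def gp2_burst_seconds(size_gib, burst_iops=3000, bucket=5_400_000):
    """Seconds a gp2 volume can sustain its full burst starting from a
    full credit bucket; credits drain at (burst - baseline) per second."""
    baseline = gp2_baseline_iops(size_gib)
    if baseline >= burst_iops:
        return float("inf")  # large volumes never drop below "par"
    return bucket / (burst_iops - baseline)

# A 100 GiB volume: 300 IOPS baseline, 2,000 seconds (~33 min) of burst.
print(gp2_baseline_iops(100), gp2_burst_seconds(100))
```

So for this volume type the answer to "does it bring me back to par?" is yes: exhausting credits drops you to the 300 IOPS baseline, not below it, but that baseline may be a tenth of what your application saw while bursting.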
These are questions that I need to ask, or you need to ask as you go into the process of designing for storage and what the look and feel of your storage is going to be while you’re moving to the public cloud.
You also need to look and consider predictable performance levels. If I am running any production, I need to know that I have a baseline. That baseline should not fluctuate as much as I have the ability to burst beyond my baseline and be able to use more when I need to use more.
I need to know that when I am at the base, I am secure in the fact that my baseline is predictable and my performance levels are going to be just that: something that I don’t have to worry about.
You should already be thinking about using benchmark programs to validate the baseline performance of your system.
There are tons of freeware tools out there, and that’s something you definitely need to do while you’re going through that design process for performance within the environment.
Storage, throughput, and IOPS. What storage tier is best suited for my use case? In every environment, you’re going to have multiple use cases. Do I have the ability to design my application or the storage that’s going to support my application to be best-suited for my use-case?
From a performance standpoint, you have multiple options for your storage tiers. You could go with gp2 – these are the general-purpose SSD drives. There’s Provisioned IOPS. There’s Throughput Optimized. There are Cold HDDs. All of these are options that you need to make a determination on.
A lot of times, you’ll think about the fact that “I need general-purpose IOPS for this application, but Throughput Optimized works well with that application.” Do I have the ability? What’s my elasticity to use both? What’s the thought behind doing it?
AWS gives you the ability to address multiple storage. The question that you need to ask yourself is, based on my use case, what is most important to my workload? Is it IOPS? Is it Throughput?
Based on the answer to that question, it gives you the larger idea of what storage to choose. This slide breaks it down a little bit: if you’re moving anything greater than 65,000 IOPS, or anything that you need higher throughput from, are you positioned for it? What type of storage should you actually focus on?
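A rough decision helper like the following captures the IOPS-versus-throughput trade-off just described. The thresholds and function name are illustrative assumptions for the sketch, not official AWS sizing guidance:

```python
def suggest_ebs_tier(iops_needed, throughput_mbps_needed, cold=False):
    """Suggest an EBS tier from workload needs.

    Mirrors the talk's framing: decide first whether IOPS or
    throughput dominates your workload, then pick a tier.
    Thresholds here are illustrative, not AWS-published limits.
    """
    if cold:
        return "sc1 (Cold HDD)"           # infrequently accessed data
    if iops_needed > 10000:
        return "io1 (Provisioned IOPS)"   # IOPS-bound workloads
    if throughput_mbps_needed > 250:
        return "st1 (Throughput Optimized HDD)"  # streaming/sequential
    return "gp2 (General Purpose SSD)"    # balanced default

print(suggest_ebs_tier(20000, 100))  # io1 (Provisioned IOPS)
print(suggest_ebs_tier(500, 400))    # st1 (Throughput Optimized HDD)
```

In practice you'd validate whichever tier this points at with the benchmark tools mentioned earlier before committing a production workload to it.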
These are definite things that we actually work with our customers on a regular basis to actually steer them to the right choice so that it’s cost-effective and it is also user-effective to their applications.
Then S3. A cloud NAS should have the ability to address object storage because there’s different use cases within your environment that would benefit from being able to use object storage.
We were at a meetup a couple of weeks ago that we did with AWS and there was an ask from the crowd. The ask came across. They said, “If I have a need to be able to store multiple or hundreds of petabytes of data, what should I use? I need to be able to access those files.”
The answer back was, you could definitely use S3, but you’ll need to build against the API to be able to use S3 correctly. With a cloud NAS, you should have the ability to use object storage without having to utilize the API.
How do you get to the point where the APIs are already built into the software, so that you can use S3 storage or object storage the way that you would use any other file system?
Mobility and elasticity. What are my storage requirements and limitations? You would think that this would be the first question that gets asked as we go through this design process.
You’d think so as companies come to us and we discuss it, but a lot of times, it’s not. People are not aware of their capacity limitations, so they make a decision to use a certain platform or a certain type of storage and they are unaware of the maximums. What’s my total growth?
What is the largest size that this particular medium will support? What’s the largest file size? What’s the largest folder size? What’s the largest block size? These are all questions that need to be considered as you go through the process of designing your storage.
You also need to think about platform support. What other platforms can I quickly migrate to if need be? What protocols can I utilize?
From a tiered storage support and migration, if I start off with Provisioned IOPS disks, am I stuck with Provisioned IOPS disks? If I realize that my utilization is that of the need of S3, is there any way for me to migrate from Provisioned IOPS storage to S3 storage?
We need to think about that as we’re going with designing storage in the backend. How quickly can we make that data mobile? How quickly can we make it something that could be sitting on a different tier of storage, in this case, from object to block or vice versa, from block back to object?
And automation. Thinking about automation, what can I quickly do? What can I script? Is this something that I could spin up using CloudFormation? Are there any tools? Is there an API associated with it, a CLI? What can I do to make a lot of the work that I regularly do quick, easy, and scriptable?
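As one example of scriptable automation, a CloudFormation template is ultimately just a JSON document, so it can be assembled and inspected in code before anything is deployed. This is a minimal, hypothetical sketch; the resource name and defaults are illustrative, and real deployments would add tags, encryption settings, and so on:

```python
import json

def ebs_volume_template(size_gib=100, az="us-east-1a"):
    """Assemble a minimal CloudFormation template that declares
    a single gp2 EBS volume. Names and defaults are illustrative."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "AppDataVolume": {
                "Type": "AWS::EC2::Volume",
                "Properties": {
                    "Size": size_gib,
                    "VolumeType": "gp2",
                    "AvailabilityZone": az,
                },
            }
        },
    }

template = ebs_volume_template()
print(json.dumps(template, indent=2))
```

A template built this way could then be handed to the AWS CLI or an API call to create the stack, which is exactly the "quick, easy, scriptable" property being asked for.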
We get asked this question also a lot. What strategy should I choose to get the data or application to the cloud? There are two strategies that are out there right now.
There is the application modernization strategy which comes with its pros and cons. Then there is also the Lift and Shift strategy which comes with its own pros and cons.
The pros associated with modernization are that you build a new application. You can modify, delete, and update existing applications to take advantage of cloud-native services. It’s definitely a huge pro. It’s born in the cloud. It’s changed in the cloud. You have access to it.
However, the con associated with that is a slower time to production. More than likely, it’s going to involve significant downtime as you try to migrate that data over. The timeline we’re looking at is months to years.
Then there are also the costs associated with that. It’s ensuring that it’s built correctly, tested correctly, and then implemented correctly.
Then there is the Lift and Shift. The pros that you have for the Lift and Shift is that there is a faster time to cloud production. Instead of months to years, we’re looking at the ability to do this in days to weeks or, depending on how aggressive you could be, it could be hours to weeks. It totally depends.
Then there’s costs associated with it. You’re not recreating that app. You’re not going back and rewriting code that is only going to be beneficial for your move. You’re having your developers write features and new things to your code that’s actually going to benefit and point you in a way of making sure that it’s actually continuously going to support your customers themselves.
The con associated with the Lift and Shift approach is that your application is not API-optimized, but that’s something that you can make a decision on, whether or not that’s beneficial or needed for your app. Does your app need to speak the API or does it just need to address the storage?
The next thing that we want to discuss is high availability. This is key in any design process. How do I plan for failure or disruption of services? If anything happens, if anything goes wrong, how do I cover myself to make sure that my users don’t feel the pain? My failover needs to be non-disruptive.
How can I make sure that if something fails, if an instance fails, my users come back and don’t even notice that a problem happened? I need to design for that.
How do I ensure that during this process I am maintaining my existing connections? It shouldn’t be the case that failover happens and then I need to go back and recreate all my tiers, repoint my pointers to different APIs, to different locations.
How do I make sure that I have the ability to maintain my existing connections with a consistent IP? How do I make sure that what I’ve designed fulfills my RPO needs?
Another thing that comes up, and this is one of the questions that generally comes in to our team: is HA per app, or is HA for infrastructure? When you go through the process of app modernization, you’re actually doing HA per app.
When you are looking at a more holistic solution, you need to think in advance. On your on-premise environment, you’re doing HA for infrastructure. How do you migrate that HA for infrastructure over to your cloud environment? And that’s where the cloud NAS comes in.
The cloud NAS solves many of the security and design concerns. We have a couple of quotes up there. They are listed on our website for some of our users on the AWS platform.
We have Keith Son, who we did a webinar with a couple of weeks ago. It might have been last week; I don’t remember off the top of my head. Keith loves the software, constantly coming to us asking for different ways and multiple ways that he could tweak or use our software more.
Keith says, “Selecting SoftNAS has enabled us to quickly provision and manage storage, which has contributed to a hassle-free experience.” That’s what you want to hear when you come to design. It’s hassle-free. I don’t have to worry about it.
We also have a quote there from John Crites from Modus. John says that he’s found that SoftNAS is cost-effective and a secure solution for storing massive amounts of data in the AWS environment.
Cloud NAS addresses the security and access concerns. You need to be able to tie into Active Directory. You need to be able to tie into LDAP.
You need to be able to secure your environment using IAM roles. Why? Because I don’t want my secret keys to be visible to anybody. I want to be able to utilize the security that’s already provided by AWS and have it just flow through my app.
VPC security groups. We ensure that with your VPC and with the security groups that you set up, only the users that you want to have access to your infrastructure have access to your infrastructure.
From a data protection standpoint, block replication. How do we make sure that my data is somewhere else?
Data redundancy. We’ve been preaching that for the last 20 years. The only way I can make sure that my data is fully protected is if it’s redundant. In the cloud, although we’re offered extended redundancy, how do I make sure that my data is faultlessly redundant?
We’re talking about block replication. For data protection, we’re talking about encryption. How can you actually encrypt that data to make sure that even if someone did get access to it, they wouldn’t know what to do with it? It would be gobbledygook.
You need to have the ability to do instant snapshots. How can I go in, based on my scenario, and create an instant snapshot of my environment? So, worst-case scenario, if anything happens, I have a point in time that I can quickly come back to.
Writable snap clones. How do I stand up my data quickly? Worst-case scenario, if anything happens and I need to revert to a period before I know I was compromised, how can I do that quickly?
High availability on ZFS and Linux. How do I protect my infrastructure underneath? Then performance. A cloud NAS gives you dedicated infrastructure. That means it gives you the ability to be tunable.
If my workload or use-case increases, I have the ability to tune my appliance to be able to grow as my use-case grows or as my use-case needs growth.
Performance and adaptability. From disk to SSD to networking, how can I make my environment or my storage adaptable to the performance that I would need? No burst limitations, dedicated throughput, compression, and deduplication are all things that we need to be considering as we go through this design process. The cloud NAS gives you the ability to do it.
Then flexibility. What can I grow to? Can I scale from gigabyte to petabyte with the way that I have designed? Can I grow from petabytes to multiple petabytes? How do I make sure that I’ve designed with the thought of flexibility? The cloud NAS gives you the ability to do that.
We are also talking about multiplatform, multi-cloud, block and object storage. Have I designed my environment that I could switch to new storage options? Cloud NAS gives you the ability to do that.
We also need to get to the point of the protocols. What protocols are supported – CIFS, NFS, iSCSI? Can I thin provision these systems? Yes. The cloud NAS gives you the ability to do that.
With that, I just wanted to give a very quick overview of SoftNAS and what we do from a SoftNAS perspective. SoftNAS, as we said, is a cloud NAS. It’s the most downloaded and the most utilized cloud NAS in the cloud.
We give you the ability to easily migrate those applications to AWS. You don’t need to change your applications at all. As long as your applications connect via CIFS or NFS or iSCSI or AFP, we are agnostic. Your applications will connect exactly the same way that they connect on-premise. We give you the ability to address cloud storage, whether in the form of S3, Provisioned IOPS, or gp2.
Anything that Amazon has available as storage, SoftNAS gives you the ability to aggregate into a storage pool and then share out via volumes that are CIFS, NFS, or iSCSI, giving you the ability to have your applications move seamlessly.
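For illustration, mounting a NAS export so an application "connects exactly the same way" it did on-premise usually amounts to a one-line mount entry on the client. The IP address, export path, and mount point below are hypothetical:

```
# /etc/fstab — hypothetical NAS virtual IP and export path
10.0.1.10:/export/pool1/vol1  /mnt/appdata  nfs  defaults,_netdev  0  0
```

The application keeps reading and writing ordinary files under the mount point; whether the backing disks are EBS or S3 is invisible to it.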
These are some of our technology partners. We work hand-in-hand with Talon, SanDisk, NetApp, and SwiftStack. All of these companies love our software and we work hand-in-hand as we deliver solutions together.
We talked about some of the functions that we have that are native to SoftNAS. The Cloud-Native IAM Role Integration, we have the ability to do that – to encrypt your data at rest or in transit.
Then there’s also firewall security, which we have the ability to utilize too. From a data protection standpoint, it’s a copy-on-write file system, so it gives you the ability to ensure the data integrity of your information, of your storage.
We’re talking about instant storage snapshots, whether manual or scheduled, and rapid snapshot rollback. We support RAID all across the board with all EBS, and also the ability to do it with S3-backed storage.
We also give you a built-in snapshot scenario for your end users, and this is one of the things that our customers love: Windows previous version support.
IT teams love this, because if they have a user that made a mistake, instead of having to go back in and recover a whole volume to get the data back, they just tell the user to go ahead.
Go back in, go to Windows previous versions, right-click on that, and restore the previous version. It gives your users something that they are used to on-premise, immediately within the cloud.
High performance, scaling up to gigabytes per second of throughput. For performance, we talked about no burst limits. SoftNAS protects against split-brain on HA failover, giving you the ability to migrate those applications without writing or rewriting a single piece of code.
We talked about automation and the ability to utilize our REST APIs – very robust REST API and cloud integration using ARM or CloudFormation templates, available in every AWS region.
Brands you know that trust SoftNAS. With the list of logos that we have on the screen, everywhere from your Fortune 500 all the way down to SMBs, they are using us in multiple different use cases within the environment – all the way from production, to DevOps, to UAT, enabling their end-users or development teams or production environments to be able to scale quickly and utilize the redundancy that we have, from a SoftNAS standpoint.
You can try SoftNAS for free for 30 days. Very quickly, very easily, our software stands up within 30 minutes, and that’s all the way from deploying the instance to serving out a tier.
Creating your disks, aggregating those disks into a storage pool, creating your volumes, and setting up your tiers – 30 minutes. Very quick, very easy, and you can actually try it for 30 days. Figure out how it fits in your environment. Test it, test the HA, test the ability to use a virtual IP between two systems – very quickly, very easy, very simple.
Taran: Okay, Kevin, let me jump in really quick before the Q&A. To thank everyone for joining today’s webinar, we are giving out $100 AWS credits for free. All we need is for you to click on the link that I just posted in the chat window.
Just go in, click on that link, and it will take you to a page where all you have to do is provide us your email address; and then within 24 hours, you will receive a free $100 AWS credit.
For those of you who are interested in learning more about SoftNAS on AWS, we welcome you to go visit softnas.com/aws or the Bitly link that’s right here on the screen.
Just go ahead and visit that link to learn more about how SoftNAS works on AWS. If you have any questions or you want to get in contact with us, please feel free to visit our “Contact us” page. Basically, you can submit a request to learn more about how SoftNAS works.
Kevin and our other SAs are more than happy to work through your use case to learn about what you may be using cloud storage for and how we can help you out with that.
Then also, we do have a live chat function on our website as well so if you want to speak to one of our people immediately, you can just go ahead and use that live chat feature and our team will answer your questions pretty quickly.
We’ll go ahead and start the Q&A now. We have a couple of questions here and let’s go ahead and knock them out. It should be about five minutes. The first question that we have here is can we download the slide deck?
You absolutely can download the slide deck. We’re going to upload it to SlideShare shortly after this webinar is over. Later on in the afternoon, we are going to send you an email with a SlideShare link and a YouTube link for the recording of the webinar.
The second question that we have here is can SoftNAS be used in a hybrid deployment? Kevin?
Kevin: SoftNAS can be used in a hybrid deployment. The same code that exists within the AWS environment can also be deployed on VMware. Each SoftNAS device gives you the ability to address cloud storage, so you would be able to utilize your regular access within AWS and still use EBS or S3 storage on the backend.
Taran: Fantastic. Thank you, Kevin. The next question that we have here is if I do choose to migrate my applications to AWS, can I do it natively without SoftNAS or do I have to migrate with SoftNAS?
Kevin: That is an interesting question. It depends. It depends on how much storage you actually have behind that application. If you’re looking at something that is a one-server solution and you’re not concerned directly with HA for that environment, then yes. You could definitely take that one server, bring it up to the cloud, and you’d be able to do that.
However, if you’re looking to recreate an enterprise-like solution within your environment, then it would make sense to consider some type of NAS-like solution to be able to have that redundancy in your data taken care of.
Taran: Great. Thanks, Kevin. The next question that we have here is, any integration with OpenStack for hybrid cloud, VIO VMware OpenStack?
Kevin: You guys, I don’t know who’s asking that question, but we do have the ability to integrate with OpenStack. We would love to talk to you about your use case. If you could actually reach out to our sales team, we’ll go ahead and schedule a meeting with an SA so we could talk through that.
Taran: Awesome. Thank you, Kevin. That was from Paulo, by the way.
Kevin: OK, thanks a lot.
Taran: That’s all the questions that we had for today’s webinar. Before we end this webinar, we do want to let everyone know that there is a survey available after this webinar is over. If you could please fill out that survey just so we can get some feedback on today’s webinar and let us know what topics we should definitely work on for our future webinars.
With that, our webinar today is over. Kevin, thanks again for presenting. We want to thank all of you for attending as well. We look forward to seeing you at our future webinars. Everyone have a great rest of the day.