Webinar On-Demand
Customer Story | NASA JPL's ESM Tool Evaluation and Selection Process
Join us for an insightful, informal recap of how NASA’s Jet Propulsion Laboratory (JPL) selected a software utilization reporting vendor. In this presentation, Frank Dowens, Enterprise Applications Software Engineer at JPL, walks through the structured evaluation process that led JPL to choose Open iT as their Enterprise Software Management (ESM) tool.
- Evaluation process: Explore the detailed steps and criteria JPL used to assess ESM solutions and why Open iT was selected
- Rigorous assessment: Understand the testing and validation process behind JPL’s decision-making
- Key challenges: Learn about the challenges faced during evaluation and how Open iT met strict requirements
- Selection insights: Gain practical takeaways for choosing the right ESM solution for your organization
January 29, 2019
30 mins
TRANSCRIPT
[0:05] Lynn: Frank is an information technology professional for NASA’s Jet Propulsion Laboratory, where he performs business process design, web programming, database design and development, SQL programming, systems engineering, requirements gathering, and IT projects. His career started about thirty years ago as a mainframe computer operator. From there he advanced to technical project manager, where he built interlibrary loan databases. A few years later he moved to JPL as an information technology lead. So before I get things off track here, I’m gonna turn the mic over to Frank. Go ahead.
[0:51] Frank: Well, thank you very much, and welcome to everybody who’s here for the webinar. I’m really excited to be presenting this information to you, and I hope that you find it as interesting as I do, so let’s go ahead and get started right away. My presentation is going to be an informal recap of the process that we went through to seek out, find, and evaluate utilization products. I’m going to give you the presentation as it happened when we did it ourselves: I’m going to show you how I conducted the study for the evaluation, in the order that I did it, and I’m going to show you some of the artifacts that I created as part of producing the final report. I do need to let you know that JPL has a policy that we don’t endorse specific products, so when I’m presenting information on the vendors that we evaluated, I will not be using those vendors’ names.

So let’s talk about the agenda for today’s webinar. I’m going to tell you about the goals we had when we initially started the study to find a utilization product. I’m going to talk about the vendors we selected and what we learned about those vendors when we conducted the study. I’m going to talk about how we did the scoring, which relates to how we did the actual evaluation of the vendors we selected. I’m going to show you how we presented the results of our scores, including how each of the vendors performed. I’ll talk about the results of the study, including where we are right now in the entire process, and then at the end we’ll have some time for questions. I do need to let you know that the release process for information from JPL is strict, so I’ll need to limit the questions to ones related to this webinar.

One note on terminology: when I’m talking about a vendor, I’m talking about the companies that provide utilization solutions, and when I talk about a product, I’m talking about the software that we manage, maintain, and deliver to our customers, and therefore the products that you’re managing, maintaining, and delivering to your customers in-house. We do use those terms interchangeably, so I’m going to do my best during the seminar to keep to those exact terms, and if I make a mistake I’ll correct myself.
[3:58] Okay, so let’s begin with the meat of the presentation: the goals for the study. The first goal we had was that we needed to increase the quality, I’m sorry, the quantity of our utilization data. We had found that with the in-house tool we had created, many of the products that we have produce logging data, and we were using that logging data to gather the utilization, manage it, and then present it in our in-house tool to produce our graphing. We were finding that that use of logging data was really problematic: the logs would sometimes go awry when the vendors didn’t flush the data from their buffer into the log, and we would have to go back and reprocess that data. That created problems in our utilization reports that we had to regularly correct. Because of this, our in-house stakeholders began to become less confident in the data; when we’d have to go back and reprocess, they felt that the stability of our data collection system wasn’t as strong as it could be. So that was also one of the reasons we did this study: to ask how we could improve that quality and improve that confidence. Now, we found that the data wasn’t quite as bad as the stakeholders had perceived it to be, but that perception was definitely something we needed to manage as well.
[5:36] As part of this process, we learned that we wanted to increase the usability of our in-house tool. We wanted to make our tool more of a service-oriented-architecture tool, where the users relied less on myself and my other developers to program reports for them using the utilization data from the back end, and could instead go to the tool and get more of those reports themselves. Since we already had an in-house tool that was working for us, as part of this study we wanted to make sure that it was the tool we wanted to use and continue with. When we had initially built our in-house tool, there wasn’t a sufficient tool in industry that we could use to get the utilization we wanted, so we built our own. The most important part of this study was then to produce what we call a build-versus-buy report: a review of whether our in-house tool and our in-house resources would be the better solution, or whether something pre-existing and available in industry now would better meet our study needs and our internal needs.
[7:15] Okay, so here’s a quick overview of everything we did to conduct the study. Right at the beginning, we started by evaluating what our needs were in utilization reporting, and we gathered our requirements. What was important there is that we didn’t gather the requirements based on our current in-house tool, otherwise it would have met all the requirements. Rather, we started from scratch, from blank, and asked: if we had everything we wanted, what would that be? What is it that we’re really looking for? What are the needs we’re missing in our current tool that we need to address for our in-house reporting? I then searched the web, and I was surprised to find that there are now quite a few vendors that supply utilization reporting. We identified those vendors, read their websites and all their online documentation, and got a good idea about the vendors just from their public presence. From that, we selected a few of them that we felt would be candidates for the type of reporting we wanted, and we sent them requests for information, including our requirements. What was important about that is that when we finally met with each of the vendors and spoke with their sales staff and technical staff, we had already sent our requirements to them. The initial meetings were great, because we were really focused on our specific questions; we were able to get right down to individual questions and individual requirements. So the presentation with each of the vendors was a request for them to respond to all the requirement sets, and to talk to us about how each of them met or did not meet those specific requirement sets.
[9:36] For the vendors we decided on, we wanted to run pilot versions of their software, because we wanted to see not just the demos that the sales staff and tech staff were giving us; we wanted to see what these products looked like with our data. That also gave me the ability to run specific reports in each of the products. As part of that entire process, we were then able to evaluate each of the vendors with our own data and score them ourselves against our requirements. The final portion of the study was the delivery to our stakeholders of all of these evaluations, and then the final results of our build-versus-buy study.
[10:39] As part of this, I also followed my systems engineering process, which included the production of common systems engineering documents such as the requirements documents, the context document, and the results of the studies. I gathered these all together and delivered them as one large report to my upper management. One thing I want to stress is that we found it incredibly valuable, when collecting our requirements, to speak to all of the members of the team. We talked to our upper management. We talked to the license management people, those who manage the changes of license files and the delivery of those services. We talked to our product leads, the people who specifically support individual products that we deliver to our customer base. And then we spoke to our discipline leads; for us, a discipline is a group of stakeholders who are the management teams for our mechanical tools and the mechanical engineers, our electrical tools for the electrical engineers, and our software systems tools. We gathered all of their information, and you’ll find, like we did, that everyone is looking for a slightly different piece of information. An example might be that the managers were looking for more information on utilization over time: how were our licenses being used, and do we have enough licenses? The product support leads were more interested in how many denials they were getting, who was getting them, and what the denials were caused by. That drove our requirements inside the tool itself, so I recommend that you get your requirement set from everyone who has a stake in what the utilization reports will tell them.
[12:56] After we’d collected our requirements and begun to run the pilots, the study revealed that the product we chose needed to support a significant number of products. This relates back to the problem we were having: our in-house tool didn’t support all of the license delivery software using the logs, and we wanted any product that would replace our existing one, if that’s what we chose, to support more products. We also learned that for any of the vendors we selected, we wanted to make sure they could import non-supported product data. The importance there is that for every product we were providing to our customer base, we wanted a way to get its utilization data into the vendor’s tool. We couldn’t accept that, for a specific product we were providing, there was no way to get utilization data into the vendor’s product, unless the product we were providing didn’t produce logging at all.
[14:32] We learned that we wanted a lot of personalized reports, which reflects the fact that we have so many stakeholders looking at the data in so many different ways; we needed to make sure that the tool was robust enough to meet all of our stakeholders’ needs. We learned that our stakeholders didn’t always want to go to the tool, log in, and run a report; they wanted that data to come to them. We wanted them to come in in the morning and have their scheduled report delivered to them via email. It could be a daily report; some of them wanted weekly reports. So that was one of our important needs.
[15:24] We also wanted to make sure that the reports generated from the vendor’s tool included our human resources information. What was important about that is that the reports we wanted to generate included looking at the data from the view of, for example, which groups in our human resources and department structure were utilizing which tools, and the utilization data as it comes in raw doesn’t include that information. So we needed to make sure that the vendor’s tool would marry that information together, so we could then produce queries and reports from the institutional point of view.
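The "marrying" of raw utilization data with HR information that Frank describes can be sketched as a simple join. This is a minimal illustration only; the record fields (`user`, `feature`, `hours`, `group`, `section`) are assumed names, not JPL's actual schema.

```python
# Hypothetical sketch: enrich raw license-usage records with HR/org data
# so reports can be grouped by department. Field names are assumptions.

def enrich_usage(usage_records, hr_directory):
    """Attach each user's group/section from an HR lookup to raw usage rows."""
    enriched = []
    for rec in usage_records:
        org = hr_directory.get(rec["user"], {"group": "unknown", "section": "unknown"})
        enriched.append({**rec, **org})
    return enriched

def usage_by_group(enriched):
    """Sum license-hours per organizational group, the 'institutional view'."""
    totals = {}
    for rec in enriched:
        totals[rec["group"]] = totals.get(rec["group"], 0) + rec["hours"]
    return totals
```

In a real deployment the HR lookup would come from LDAP or an HR database rather than an in-memory dictionary.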
[16:13] We wanted to make sure that we had really good denials reporting, and the denials reporting needed to be focused. As you know, when you look at denials, depending on the product and the way it records its denials, sometimes a product can record denials before it moves on to a release of a license, so we wanted to make sure the denials reporting within the vendor’s product could handle the different kinds of denials that happen. One of our stakeholders was also really interested in getting real-time information, that is, what’s happening right now on the system: who’s logged in now, married with the HR information of what group or section they belong to.
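Frank doesn't say exactly how the vendor filtered denials, but one common heuristic for the problem he raises (denials recorded just before a license is eventually granted) is to discard a denial if the same user checks out the same feature shortly afterward. The sketch below illustrates that idea; the event shape and the one-minute window are assumptions for illustration.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of "focused" denial filtering: a denial is treated as
# noise if the same user obtains the same feature within a short window
# afterward. Event fields and the window size are assumptions.

def true_denials(events, window=timedelta(minutes=1)):
    """Keep only denials NOT followed by a checkout of the same
    user/feature within `window`."""
    checkouts = [e for e in events if e["type"] == "checkout"]
    kept = []
    for d in (e for e in events if e["type"] == "denial"):
        followed = any(
            c["user"] == d["user"] and c["feature"] == d["feature"]
            and d["time"] <= c["time"] <= d["time"] + window
            for c in checkouts
        )
        if not followed:
            kept.append(d)
    return kept
```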
[17:18] Okay, so at this point in the study we’ve gathered our requirements, we’ve spoken to the vendors, we’ve selected vendors for pilots, and we have our own data in the system. It’s revealed an initial set of information about what we’re going to need in a utilization tool, so I want to talk real quick about some of the things we learned about the vendors running the pilots. Vendor A had a very clean, modern interface. I’m not going to go over each one of these bullets, but the important ones: they had a clean, modern interface; this product didn’t allow us to run an on-site pilot, but they did have a well-designed online interface with lots of data we could run reports on. The thing about vendor A that mattered to us is the final bullet: their presentation of data was from a server point of view. What was important about that is that for our triad systems, you couldn’t run a report across all three triads simultaneously, so there was a limit in that report perspective. This vendor also had a problem with the ability to filter their denials.
[18:58] And so that was problematic. Here’s what we learned about vendor B and what was important about them. Vendor B’s product was very lean, very clean; there wasn’t a lot of JavaScript, and the images it produced were very simple, but it did give everything we needed; everything was there. Compared to vendor A, they had the ability to filter the denials, so we could limit the reporting there. However, what was interesting about vendor B is that because they were so lean, we had to evaluate whether vendor B would be able to meet the study requirement of increasing the quantity of our reporting.
[20:01] This is what we learned about vendor C. They had a really well-designed user interface. What was really important to us about vendor C was the locally stored LDAP data: that released some of the load on our institutional LDAP system, so the vendor’s tool wasn’t constantly hitting our LDAP system to gather information. The tool also stored that HR information and its changes over time, so that if, for example, the number of people in a group changed, that information changed as well; that’s important for seeing how the trends in the group information change over time. Vendor C also supported a large number of license servers compared to the other vendors; they supported 25 different license vendors. And vendor C had a capability that we had and liked in our in-house tool: the creation of pseudo vendors. What we mean by that is, for those of you who do license management, you know that some product suppliers will send you a license file that bundles their products together in a single file, and you need to be able to pull that data apart and represent the utilization as the separate products you’re providing to your customer base. Vendor C had the capability of doing that, and we appreciated it.
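The pseudo-vendor idea Frank describes amounts to a remapping table: features that arrive bundled under one license vendor are reported as separate products. A minimal sketch, where the vendor and feature names are invented examples:

```python
# Hypothetical sketch of "pseudo vendors": remap (vendor, feature) pairs
# from a bundled license file to separate product names for reporting.
# The mapping entries below are invented for illustration.

PSEUDO_VENDOR_MAP = {
    ("acme_suite", "acme_cad"): "ACME CAD",
    ("acme_suite", "acme_fea"): "ACME FEA",
}

def product_for(vendor, feature):
    """Resolve a (vendor, feature) pair to the reported product name,
    falling back to the raw vendor name when no mapping exists."""
    return PSEUDO_VENDOR_MAP.get((vendor, feature), vendor)
```

With a table like this, two features checked out from the same bundled license file show up as two distinct products in utilization reports.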
[21:46] So we’ve evaluated our vendors, and they’ve been collecting our data. When we felt we were at the point where we could competently reflect on the vendors’ capabilities against our needs and requirements, we did a scoring process, and here’s how that went. The first thing we did was give each requirement a weight, scaled to 10. I chose a 2-to-10 scale rather than a 1-to-5 scale, and I did that because I wanted to make sure that when the level 8 and level 10 requirements were met, that would weight the scale higher for those important elements, and that when the low requirements, the twos and the fours, weren’t met, that would skew the score down. That was my first goal; my second goal was that it would produce a more meaningful final score at the end for my upper management and my stakeholders. For each of those weighted requirements, we evaluated the vendors and gave them a score for each requirement. We then multiplied that score by the weight, and that was the final grade for each vendor on each requirement.
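The scoring scheme described above can be sketched in a few lines. The weight scale (2..10) is from the talk; the per-requirement score scale used here (0 = not met, 1 = partially met, 2 = met) is an assumption for illustration, since Frank doesn't state the exact scale.

```python
# Sketch of the weighted scoring: grade = per-requirement score * weight,
# and a vendor's final score is the sum over all requirements.
# The 0/1/2 score scale is an assumption, not Frank's exact scale.

def grade(weight, score):
    """Grade for one requirement: score multiplied by the weight."""
    return weight * score

def final_score(requirements, vendor_scores):
    """requirements: {req_id: weight}; vendor_scores: {req_id: score}.
    Unscored requirements count as not met (0)."""
    return sum(grade(w, vendor_scores.get(rid, 0))
               for rid, w in requirements.items())
```

Because the grade multiplies score by weight, a missed weight-2 requirement barely moves the total, while a missed weight-10 requirement costs up to 20 points, which is exactly the skew Frank was after.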
[23:41] So here’s an example. What we’re looking at here is one of the artifacts I produced as part of my overall report. This is requirement number 36; TUR stands for tool utilization reporting. This requirement concerns the ability of the vendor’s product to produce a custom report. Our in-house tool met that requirement, because since I designed the system and the back-end database tables exist, I knew how to write my own SQL against my system. Vendor A said that they didn’t meet the requirement, but I actually scored them up, because during the evaluation I found that they had a web GUI interface for producing SQL statements that could be injected into the system, so I scored them higher. I did the same thing for vendor B, giving them a score of partially meeting the requirement, because I was able to go into their back-end database and look at their database schema, and I found it not to be too complicated; I felt we’d be able to learn those database tables and write our own SQL statements against the direct raw data. Vendor C met the requirement as well and had a robust system of reports pre-built into the system. Looking back, I probably should have scored vendor C as exceeding the requirement, but they ended up with the 12.
[25:44] Here’s an example of another report that I generated. This is my visual representation of the scoring, and I did this so that you could get a quick and easy overview of the score for each of the vendors. You’ll notice that these are weight tens, and notice that I’m showing only the summary, not the actual requirement; you wouldn’t actually write a requirement in that kind of language. You can see that vendors A and B missed meeting some of those requirements, and even though they’re in our must-haves, the stakeholders are still going to look at that and ask themselves whether those are requirements they could live without, should we choose that tool.
[26:36] So here’s a scale of the results for all of our weight 10 requirements; these are our must-haves, and we had 15 of them. I feel that within the weight 10 requirements there was a set that you should expect all of the vendors in the industry to be able to do. For example, there was a requirement that said: be able to produce a utilization-over-time report. From a requirements-gathering point of view that’s an important requirement to have, but it means that all of the vendors are going to score a little high in your weight 10 requirements. You’ll notice, though, that vendor A didn’t score quite as high as I would have expected in these weight 10 requirements, and looking back at vendor A, I believe that was because they presented their data from the server point of view, where we couldn’t get a report for the sum of our triad systems, as an example. Our in-house tool scored very high, and that was because we built it ourselves, so we were obviously meeting all of our important needs.

Our weight 8 requirements are the requirements that were highly important; there were 14 of them. It’s interesting to note that at this point vendor A fell significantly behind in their ability to meet those requirements. These requirements were those beyond the basics: the requirements where we’re really starting to add usability and meet the needs of our institution, where we’re starting to see whether the tools meet our business needs. You can see that even in these areas our in-house tool didn’t meet all the requirements, and in our build-versus-buy report, that represents the area where we would have to invest in our own tool to meet those requirements. Here you can see that vendor C did extremely well and even scored higher than our in-house tool.
[29:15] Here are the results of the weight 6 requirements. In this area vendor B fell quite far behind, and our in-house tool didn’t do very well either; there’s a large area right here where, should we do the internal build solution, this would be a significant portion of our in-house investment. In our weight 4 requirements, vendor B fell behind again, and vendor C performed better than our in-house tool. There are seven of these; the weight 4 requirements are somewhat important requirements. There were only two weight 2 requirements, so I didn’t include that graph. But here’s the final score: you can see that vendor C scored higher than our in-house tool, and you can see that none of the vendors, nor the in-house tool, met all of our requirements. So there’s an area where our stakeholders will need to evaluate what the missing requirements are and determine the impact on our internal processes. What this final score chart showed is that there was really going to be a decision between our in-house tool and vendor C.
[31:11] So here’s a heat map of all the requirements. What this shows you is a very quick and easy-to-read review of the performance of each of the vendors and our in-house tool. One thing I found interesting is that I’d made a category for exceeding requirements, and I didn’t have as many as I expected; I expected there would be more. And of course green means met the requirement, yellow somewhat met the requirement, and red did not meet the requirement. What’s also important about this graph is that if you look down in the weight 4 requirements, there’s a large block of red for vendor B, vendor C, and also our in-house tool, where we didn’t meet the requirement; vendor A did meet some of those. That’s an area where we need to go back, look at those requirements, and ask ourselves whether these are really questions we should be asking, because we’re finding that the industry tools aren’t answering them.
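The heat map buckets each graded result into the four categories Frank names (exceeds, met, somewhat met, did not meet). A minimal sketch of that bucketing, where the thresholds relative to the weight are assumptions layered on the illustrative 0/1/2 score scale (with graded values above twice the weight counting as "exceeds"):

```python
# Hypothetical sketch of heat-map bucketing: map a graded result
# (score * weight) to a color relative to the requirement's weight.
# The thresholds assume a 0/1/2 per-requirement score scale.

def heat_category(graded, weight):
    """Bucket one graded result into the four heat-map colors."""
    if graded > weight * 2:       # exceeded the requirement
        return "blue"
    if graded == weight * 2:      # met the requirement
        return "green"
    if graded > 0:                # somewhat met the requirement
        return "yellow"
    return "red"                  # did not meet the requirement
```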
[32:39] Okay, so we’ve evaluated the products and we’ve scored them, and now it’s time to conduct the final piece of the build-versus-buy study: comparing the results of the scoring against the cost of investment. This is the result here. Now, the purchase graph line there is relative to itself; I didn’t want you to be able to look at the scoring and determine any cost from it, so the cost is obscured. It’s relative to scale, but it is represented accurately, so when you look at this you can get an actual feel for what the investment was going to be for each of the tools. The cost of our in-house tool represents what we expected to invest in-house to meet the requirements. I also want to let you know that the cost doesn’t include the sustaining labor cost, because we expected that to be a fixed cost, the same for each of the tools.
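The comparison Frank describes (and the "features per investment dollar" conclusion that follows) boils down to dividing each candidate's total score by its relative cost and picking the best ratio. A sketch, with placeholder numbers, since the actual costs were deliberately obscured in the presentation:

```python
# Sketch of the build-versus-buy comparison: total requirement score per
# unit of relative investment. All figures below are placeholders.

def features_per_dollar(total_score, relative_cost):
    """Score earned per unit of (normalized) investment."""
    return total_score / relative_cost

def best_value(candidates):
    """candidates: {name: (total_score, relative_cost)};
    returns the name with the best score-per-cost ratio."""
    return max(candidates, key=lambda n: features_per_dollar(*candidates[n]))
```

Note this only compares return on investment; the stakeholders still had to weigh which unmet requirements they could live with, as Frank points out.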
[34:13] Okay, so the conclusion of the study. We determined that vendor C, as you can see in the previous graph, had more features per investment dollar; vendor C was going to be a better return on investment for our dollars than even our in-house tool. We learned that vendor C had capabilities that weren’t even represented in our initial requirement set. In the end, we couldn’t justify the cost differential of investing in rebuilding our in-house tool compared to what we would get from the purchase. So the result of the build-versus-buy was to buy, and we purchased the vendor C product.
[35:17] And so where are we right now? We’ve just finished our phase one implementation of the vendor’s tool, which included setting up all our supported license vendors, the ones natively supported by the vendor’s product, which surprisingly was a significant number of the tools we provide to our customer base. While we’re collecting our data right now, we are running our in-house tool and vendor C in parallel, and over time we’ll phase out our in-house tool as we continue training our in-house stakeholders on how to use the tool and how to find and understand the reports. We’re also going through the process right now of generating the personalized reports that our stakeholders are going to want.
[36:29] The process that we went through here isn’t one that I invented; it’s a standard software systems engineering practice. A reference for this process can be found in this book; it’s specifically discussed in chapter 22. I recommend looking into the process of vendor evaluations as it relates to software requirements. It was incredibly helpful to see and use an industry standard for doing the evaluation; it increased my confidence in performing the evaluation and in knowing that the report I was producing for my upper management followed an industry best practice. I want to thank everyone for coming today and listening. I hope that you found it interesting, and I hope you’re inspired to try this kind of process yourself.

