hello and welcome to today's partner web conference, this is the FastTrack Dynamics 365 for Finance and Operations, Enterprise Edition Tech Talk. Today's topic is the task automation framework for data management. Presenting for us today from Microsoft we have Senior R&D Solution Architect Sunil Garg. So without any further delay, Sunil, the floor is all yours.
Good afternoon and good evening everyone, this is Sunil Garg. Today we are going to take a look at some of the new capabilities that we are building in data management, around automation of the most frequently used tasks in the space of data management. Before we dive into the session and the demos, a quick look at some of the objectives: why we started to work on this and how we see it being used by all of us as we go through the implementations in our projects.

The first set of scenarios we were looking at was basically to make it easy for implementation teams when it comes to, say, creation of data projects. We have heard a lot of feedback from all of you, and from our field in general, around the lack of tools when it comes to managing configurations for data projects and managing configurations for recurring schedules, where teams have to go and create these data projects and recurring schedules on every environment as they go through the implementation phase. For large projects it is not unusual to see teams end up creating, you know, 50 to 200 data projects, and if they have not found creative ways to automate that, they end up doing it manually, so there is a lot of time that teams spend trying to do that. Also, today we don't have a good story around ALM for managing such configurations. So this was a scenario we wanted to address as we worked through this task automation, and we will take a look at how it actually unfolds in a quick demo towards the end of the session.
The next set of scenarios is around importing data packages into data management without having to manually create the data project, and without having to manually deal with the data packages and the data files that need to be imported. Right now there is an option in LCS where somebody can go and deploy a data package from LCS. This is an alternate solution, a second option if you will, for implementation teams to explore and see how it can be useful in addition to what we already have on the LCS side. Think of things like demo data setup: once the environment is deployed, if it needs some test data then typically teams go and import the demo data packages, so we'll see how this option streamlines that process. Along the same lines, it can be any data package; it doesn't have to be a demo data package, it could be the golden configuration data packages that need to be applied as base configurations on a brand-new system that was just deployed. It also applies to data migration, and I will talk in a little more detail about data migration scenarios as we go through the presentation, but basically the idea is to identify tasks from the data migration plan that can be automated using these new capabilities, as applicable.
The next set of requirements and objectives was around test automation of the data entities themselves and of integration scenarios. We know the challenges that teams go through, because we go through the same pains, and we recognize that the implementation teams and partner teams working on extensions and on data entities must be going through similar pains when it comes to validation of data entity functionality and validation of integration functionality. Today such teams go through testing these scenarios manually, but if there is a way for us to help with automation of such scenarios, it should definitely help the overall process and the overall turnaround time when it comes to validation of data entity specific scenarios through the implementation. So we will take a look at how the task automation framework can also be used for automated testing and how that plays out. These were some of the core objectives we had in mind when we started to work on this specific framework.

Now I'll dive a little bit into the test automation objective. To put things in context, when it comes to data management scenarios there are different buckets of configuration, if you will, that somebody has to do, and if we put all these configurations into a permutation and combination matrix, then all of a sudden the scenario matrix just explodes. If we multiply that by the number of data entities that somebody has to test based on the functional footprint of the implementation project, then the number of test cases somebody can identify becomes quite staggering. The challenge around why it becomes so difficult to test and manage is basically the set of behaviors that somebody can go and configure in data management. This is all flexibility that the data platform framework provides, but at the same time it also creates challenges in terms of validation. Let's start with the entity behavior.
There are a bunch of file formats that are supported out of the box, and every file format really needs to be tested, at least based on what the implementation team wants to use. Going forward there are other entity behaviors, for example change tracking. Change tracking for your entities should be tested under different permutations and combinations again: does the entity behave well if you want to push the data to your bring-your-own-database flows, and does the entity behave well with a full push versus an incremental push in the context of change tracking? There is also the concept of having one Excel file with several worksheets, with one entity mapped to a specific worksheet. If somebody wants to bring such worksheets in for import, for example, does it work? Does it work from recurring integrations? Does it work from the UI? So if teams are looking to use a specific functionality, they should be able to validate that it works in all the permutations and combinations of scenarios.

Continuing on the entity behavior itself, there are a few other settings that somebody can configure. I'll call out the parallel processing settings, and I'll also call out the entity mapping settings that somebody can do on the mapping itself: should the value for a column be defaulted, or should it be auto-generated using a number sequence? These become quite significant in the context of data migration. Teams can go and configure these detailed rules, but when somebody is migrating thousands and sometimes even millions of records as part of migration, the task of validating whether the data came in correctly is a huge task; teams spend weeks, depending on the size of the data migration, just to validate the data. Our goal, as we go through this journey, is to help ease that pain as well, by looking at ways in which the data can be validated right out of the task automation manager itself, just to give a quick idea of the quality of the data that has come in. By no means should this be a replacement for all the testing and validation that teams do, but the hope is that a quick validation of whether the data looks good or not can go a long way in terms of helping save time for implementation teams.
Last but not least on the entity side are the entity filters. We should be able to define filters on an entity, which we can today, but we should also be able to validate them as part of the normal QA process for implementation teams, in an automated fashion of course. Then moving forward are the entity types. As we all know there are simple entities, composite entities and self-referencing entities, so again, depending on what Microsoft ships and depending on what the implementation team starts building on their own, they should be able to leverage this automation capability to test all around, for whatever entities the team is building. The entity data also plays a big role in the final outcome of the import/export process: simple data versus self-referential data behave differently, so it is absolutely important for teams, including us at Microsoft before we ship something, to make sure that entities behave well with different kinds of data. These were the challenges that we are going through, and we recognize that they should be challenges for partner teams as well, and hence this framework should certainly help going forward.
Entity behavior was one aspect of it; the other big aspect is data projects. Data projects also have their own parameters and switches that somebody can turn on or off to change the behavior depending on the requirements, so this again creates its own flavors, and hence challenges, around how somebody can make sure that an entity they have developed will work across them all. There are other dimensions too. From a testing perspective, reliability is a big piece on top of functional and performance testing: if a specific operation were performed a thousand times, will it still continue to work? Can we push a thousand messages per minute or per hour to the integration endpoint, and how will the system behave? These are aspects that are interesting for us from a Microsoft perspective, but this tooling will be available to everyone, so depending on how complicated your customers' projects are, you will now have these tools in place for the implementation teams to validate things exactly the way we are validating them, in an automated fashion, not just a manual fashion. Again, cloud versus on-premises is also a pretty significant parameter. Right now the framework is supported for cloud; we are going through the validation for on-premises, and it will take some time for us to make sure that this automation is fully functional for on-prem, but it is absolutely the goal that the same framework should be available to perform automation in general, and also automated testing, on-prem as well.
And then finally, the other most important aspect of any automated testing, or any automation, is validations. It is great that we are able to define automated tests very quickly, as we will see, but the other challenging aspect in data management in particular is to make sure that whatever was performed, whether it is an import or an export, actually succeeded. There are different points of failure all along the way, and hence it becomes quite challenging to make sure that everything has passed for a specific scenario. I will call out the data corruption aspect of validation, which is quite central from the data migration perspective I talked about: did everything come in as expected? This is a very simple question to ask, but once we start to answer it we run into all sorts of challenges and complexities, simply because there are so many options that somebody could go and configure, thanks to the flexibility the framework provides, that it becomes quite challenging to validate whether the data came in correctly or not. This is an area that we have not started to work on yet, simply because it has been a challenge for us to figure out a consistent pattern that can be automated from a validation perspective. So we will certainly welcome feedback from all of you as you start playing with this functionality; your feedback is always welcome in terms of how we can make this happen.
And last but not least, when I say specific settings, what we mean by that is: based on what was configured out of the 50-plus settings that are there on the slide, the framework validates that specific behavior. For example, if a setting such as truncation was enabled, we need to make sure and validate that the truncation actually took place. That's what we mean by specific settings: based on what was set up, the automation framework is aware of those settings and will go and validate by itself whether each setting took effect or not. So that is the context, laying out the landscape around why it is challenging to validate data entities and integration scenarios from a data management perspective, and how the automation framework can be handy to help in this regard.
Moving to the next slide: from a framework design standpoint, these are the high-level principles and guidelines that we have and want to follow for this framework. The first one is that it is built into the product. This is not something that somebody has to download as a separate tool or get from GitHub; it is built into data management itself, which makes it available for everyone and on every environment. You can use this on a single-box dev machine, on any sandbox environment, or on any environment including production. We will talk about the do's and don'ts for production environments, but in general this is available for everyone and anyone who has access to data management. That was one of the core principles we have been sticking to. The next one is no code / low code, meaning it should be possible even for a functional consultant to go and define a task that needs to be automated without writing a line of code. That increases the usability of the framework itself and makes it very agile from a usability perspective, and we will take a look at how that unfolds and what we mean by declarative authoring of tasks.
The next piece is the handling of the data itself, whether it is a data package or a data file, but most likely data packages, which we all use. The handling of the data package in the context of automation should be very seamless, and we should not be passing data packages around between team members for automation purposes. So the approach taken here is to simply integrate with LCS: you can declaratively define, for a task, which LCS project the data package needs to be pulled from, or, if it is in the shared asset library, you can do that also. This enables scenarios where a central team or a group of functional consultants can manage the data aspect and define the tasks or the tests, depending on the objective, and then the rest of the team can just start using it. Even if the team is spread across the globe it shouldn't matter, because the data is centrally located and the teams can just start using it without having to worry about which golden configuration package to use or which data migration package to use; it's all there in one place.
The next one, AOT resources, is purely from a dev ALM perspective. If we take a look at the configuration of data projects as a scenario, very soon one realizes it would be really nice if the configuration management of data projects and recurring schedules was integrated with the dev ALM, with the overall ALM itself. That is what we mean by AOT resources: the manifest that somebody writes for these tasks to be automated, the XML file, can just be checked into source control; it can be a resource in the AOT, and the task automation manager, once it is deployed to an environment, can read the resources of that specific kind from the AOT itself. This makes the end-to-end process very seamless; there's no passing around of manifests, if you will, it all flows through the ALM process itself. This also opens the door for us at Microsoft to share key test cases with all of you, which you can take and run before you jump on to a certain platform update, or if you want to quickly assess the quality of data management itself. That's the goal we are working towards.
The next one is that it supports loading of a manifest from the file system as well, so sure, if you want a quick run of your test or a task, you can just load it from the file system too. Last but not least, we talked about validations: building rich validations is a core tenet, because the ultimate value proposition, the ultimate success, of any task automation is to give the confidence that yes, it passed, and here are the reasons why it passed, or it failed, and here are the reasons why it failed. The more rich validations we are able to build into this, the more value implementation teams can derive out of the framework itself.
Now let's take a look at the no code / low code concept and how the manifest looks. At a high level, the XML manifest has two big groups. One is the shared setup group: the shared setup is where everything around what data to use, what the job (data project) configuration is, and what entity configuration needs to be applied for a specific task or test is defined. Then comes the test group: this is where all the tasks are defined. We will take a look at how each specific element looks.
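Sketched out, the overall shape of a manifest along these lines could look as follows. The element names below are illustrative reconstructions of the structure described here, not the published schema, so check them against the wiki once it ships:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Illustrative skeleton only: a shared setup reused by all tasks,
     followed by a test group that holds the individual tasks/tests. -->
<TestManifest name="Demo data setup">
  <SharedSetup>
    <!-- data files, job (data project) definitions, entity setup go here -->
  </SharedSetup>
  <TestGroup name="Set up demo data">
    <!-- individual tasks/test cases go here -->
  </TestGroup>
</TestManifest>
```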
Now if we expand the shared setup, this is where the data is defined. If you look at the data definition itself, it takes the asset type and the asset name, and most importantly it takes the LCS project as an attribute. The LCS project ID can be blank, and if it is blank it means the shared asset library; if it has a specific project ID, then the framework knows to go and grab the specific data package from that project's asset library. This way one can manage the data aspect centrally, and as long as the user running the automation has access to the LCS project, they should be able to just run it without having to worry about where the data is coming from.
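As a rough sketch of such a data definition (the attribute names and the package name are assumptions pieced together from this description, not an authoritative sample):

```xml
<!-- Blank lcsProjectId = pull the asset from the LCS shared asset library;
     a specific project ID = pull it from that project's asset library. -->
<DataFile ID="SharedSetup"
          name="Golden configuration package"
          assetType="Data package"
          lcsProjectId=""/>
```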
The next piece somebody will define is the job definition, and this is how the definition of a data project looks. It basically has all the parameters that it is possible to configure on a data project, so as we add new capabilities to data management itself, the automation needs to stay in sync; you will see this list of elements expanding or shrinking depending on what the data management features do. For example, one can define what the operation should be, whether it is an import project or an export project that somebody wants to create, skip staging versus truncation, and so on. The next important element here is the mode: should the import be done as an asynchronous import, like somebody going through the UI and just clicking the import button, or should it be done through the recurring APIs, using the enqueue API for example? So the mode in which the operation has to be performed can be specified.

Now, from a recurring import or recurring export perspective, this is something we would not encourage you to run on production, because the intent of having this capability is not to implement integration scenarios with it. It is purely a quick way to validate that all your plumbing is in place: you can quickly do a test to make sure that yes, integrations are working and everything underneath is working. So this is a quick way to validate the infrastructure setup and to validate that all your integration points are configured correctly; by no means should it be used to implement production integration scenarios as such. This will be called out in the wiki as well as we roll out this functionality in PU16, but it is something to keep in mind on that front.

Similarly, you can configure a task to be configuration only, and this is where configuration management comes into play: you can say, go and create this data project, go and configure the recurring schedule, and this is a configuration-only task, meaning do not start the operation, just create the project and come back. This element comes in quite handy to enable those kinds of scenarios. Then you have the flexibility to define the batch aspect if it is a recurring type of operation, so you can define all the batch-related parameters: what is the frequency and the number of times, and what is the frequency for sending messages to the endpoint. This is where reliability testing comes into play: somebody can configure this to say, hey, send 10 files every second, or send one file every 30 seconds; whatever that frequency is, it can be dialed up or dialed down, and that gives you quite a powerful tool at hand to quickly go and test these scenarios in an automated fashion. I will not go through all the elements here, but that gives an idea of how the data project setup is done.
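To make that concrete, a job definition along the lines just described might look roughly like this. The element names and sample values (operation, configuration-only flag, mode, batch and upload frequency, legal entity) are illustrative reconstructions of what is described in the talk, not an authoritative schema:

```xml
<!-- Illustrative job (data project) definition; names are reconstructed, not authoritative. -->
<JobDefinition ID="ImportJobDefinition">
  <Operation>Import</Operation>                      <!-- Import or Export -->
  <ConfigurationOnly>No</ConfigurationOnly>          <!-- Yes = create the project only, don't run it -->
  <Mode>Recurring batch</Mode>                       <!-- 'Import async' (UI-style) or 'Recurring batch' (enqueue API) -->
  <BatchFrequencyInMinutes>1</BatchFrequencyInMinutes>
  <NumberOfTimesToRunBatch>2</NumberOfTimesToRunBatch>
  <UploadFrequencyInSeconds>30</UploadFrequencyInSeconds>  <!-- e.g. one file every 30 seconds -->
  <TotalNumberOfFilesToUpload>10</TotalNumberOfFilesToUpload>
  <LegalEntity>DAT</LegalEntity>                     <!-- task runs in this company, no switching needed -->
</JobDefinition>
```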
The next piece is the entity setup itself. Along similar lines to what we have seen so far, all the entity behavior is also defined as elements here: what format to use, whether change tracking should be set to yes or no, whether the entity should be published to your bring-your-own database if you have those scenarios, and all the other aspects of the entity behavior, including the parallel processing settings. Now, the other key piece I want to call out here is that typically when we use data packages there is of course more than one entity; when we deal with demo data packages or even bigger implementations, they may have hundreds of entities. At least the demo data packages that we ship go all the way to 400-plus entities in the same data package. So it is quite useful to have the concept of a wildcard in the entity setup, where we say that these settings apply to all entities in the data package without having to explicitly define them one at a time. That is where the star in the entity name comes into play: it is a wildcard, and it applies the settings to all the entities in the package. At the same time, if you give a name like entity name equals, say, Currencies, and then define certain behaviors, the framework is smart enough to understand that for the currency entity it is going to apply those specific settings, and for the rest of the entities in the package it will just apply the generic settings indicated by the wildcard. All of a sudden this gives you good flexibility to quickly enable many scenarios in a very short time using the same data package, without having to break it up into individual files and whatnot.

On the right-hand side of the screen you can see the concept of mapping details; that is where you can define entity-specific mapping details, such as whether a specific column in the entity should be auto-generated or defaulted, and so on and so forth. Now, building validations for those is quite challenging, and that is what I was mentioning earlier: we haven't really started on the data validation side of it, because of these challenges and because of the fact that the input can be in several different formats, Excel or CSV, you name it. Trying to come up with a canonical form that can take into account all the formats, all the business logic here, and also the business logic that can be there in extensions as implementation teams extend the entities: those are the challenges we are trying to figure out, how we can account for all of this and still be able to provide very reliable validation capabilities.
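Before moving on, here is a rough illustration of the wildcard-plus-override pattern described above: a '*' entry for generic settings and a named entry (Currencies) for entity-specific overrides. Element names are reconstructions based on this talk, not a published schema:

```xml
<!-- Illustrative entity setup: '*' applies to every entity in the package,
     a named entry overrides the generic settings for that one entity. -->
<EntitySetup ID="Generic">
  <Entity name="*">
    <SourceDataFormatName>Package</SourceDataFormatName>
    <ChangeTracking>No</ChangeTracking>
    <PublishToBYOD>No</PublishToBYOD>
  </Entity>
  <Entity name="Currencies">
    <ChangeTracking>Yes</ChangeTracking>   <!-- entity-specific override -->
  </Entity>
</EntitySetup>
```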
The final section in the manifest is the task itself. After having defined all the setup information for the data, the project and the entities, it's time to define the unique tasks that use that setup: they can inherit it and also override it at the task level to create unique task behavior. So this is a sample task here; I just have a task that will import the shared setup demo data package. If you notice, the job definition here is actually overriding the mode in which the import has to happen. In the shared setup, the common setup, it has recurring batch, because the expectation is that most of the tasks in the manifest will use the recurring batch, so we put that in the shared setup; but this specific task wants to use import async, basically somebody clicking the import button in the UI. For addressing such requirements, the task can override a specific element at the task level, and that opens up a lot of flexibility in terms of how one can leverage the shared setup, thereby keeping the manifest in a very manageable form while still getting all the flexibility.

Now, there are some attributes here I wanted to talk about, for example the repeat count. The repeat count is not implemented yet, but the idea and the objective is to enable repeatable scenarios: if somebody wants to execute a task 10 times, they should be able to do that just by setting the repeat count to 10, and the framework will do it ten times. It can be used for configuration purposes, and it can definitely be quite handy from a reliability testing perspective. The next attribute here is trace parser. Again, this is not yet implemented, but just to share the vision and the objective: we would like to have the trace parser also integrated as part of the automation framework. This is in the context of testing itself: let's say you have deployed your UAT environment and you are running a bunch of automated tests using this framework. If trace parsing capability were built in, you could just enable or disable it per task; if you are importing the vendor entity or the customer entity, you definitely want to make sure you have the traces captured at that time, because if you see some performance issues then the traces are right there. You don't have to run the test again just to figure out, OK, let me switch tracing on now and rerun the scenario. The idea is to make it as easy as possible, and also to use this to have a QA bar defined for every implementation project, to make sure that before somebody goes into UAT, or before somebody goes into production, they have gone through this process to validate and vet out the entity-related scenarios. And then the timeout is quite obvious: you can define the maximum time the framework has to wait for a task to get completed, because for various reasons a task may take more time than expected, which clearly indicates a problem, but at the same time we don't want the rest of the tasks to just be waiting because a previous task never completed. So one can define a timeout, and the framework will just time out that task and continue with the next one.
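Pulling the attribute discussion together, a task entry of this kind might be sketched as follows. RepeatCount and TraceParser are shown for illustration even though they are described above as not yet implemented, and both the attribute names and the override syntax are reconstructions rather than the published schema:

```xml
<!-- Illustrative task/test case: references the shared setup by ID,
     and the job definition override switches this one task to 'Import async'. -->
<TestGroup name="Set up demo data">
  <TestCase Title="Import shared setup data package"
            RepeatCount="1" TraceParser="off" TimeOutInMinutes="60">
    <DataFile RefID="SharedSetup"/>
    <JobDefinition RefID="ImportJobDefinition">
      <Mode>Import async</Mode>   <!-- overrides the 'Recurring batch' mode from the shared setup -->
    </JobDefinition>
    <EntitySetup RefID="Generic"/>
  </TestCase>
</TestGroup>
```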
So now we can quickly jump to the demo. Let me quickly share my screen. This is the D365 FO environment; this is a brand-new environment, it doesn't have any data in it, and this is typically where implementation teams will start when they deploy an environment, which is why I'm choosing the same flow. This is the new task automation area; if I open it, we see the task automation manager and the different options related to the manifest itself. One can load a manifest from the file system, as we talked about, and I have these different manifests created here just to share the vision, just to share the concept behind it. Once you start playing with this, there will be questions around what should constitute a manifest, how many manifests one should have, and what the best practices are for defining a manifest. As examples here: if we look at data configuration, I have a separate manifest for data configuration itself, and this manifest can be checked into source control and becomes my ALM story; this is the configuration management process we are enabling here. From a testability perspective, teams can choose to have check-in tests, regression tests, reliability tests and so on and so forth, so those would be different manifests. And if teams want, they can have a demo data setup manifest as well, that defines which packages should be set up, and it can have a golden configuration data package inside it, referenced by the tasks to be used. So you can quickly see how teams can certainly have more than one manifest, and how the manifest, since it's just an XML file, can be subjected to source control and can just flow through your ALM process as well. I can certainly see teams having data migration related manifests, where, based on how complex and how big the data migration plan is, there will be tasks in the data migration plan about importing data into the system; those can certainly be taken out and automated using the task automation here, and they can be in their own manifest. That way teams know what tasks to perform, which they automate once and then run over and over again. So now let me quickly load the data configuration manifest, for example.
So now all the tasks have been loaded from the manifest, and the columns that you see here are the attributes and elements from the manifest itself. Now, continuing the manifest discussion, this is where the manifest can also be loaded from within the build itself, and this is where the AOT resource concept comes into play. Here I only have three manifests that are checked into my build, which is why I am seeing only these three manifests. So this is another way to load a manifest, and the way we see it, this will be the primary way to load manifests, as it provides a good process from an ALM perspective. Somebody can also download a manifest; again, this download is for the manifests that are in the build itself, and this is how you will be able to download the manifests that we ship as part of a platform update very soon. The key test cases that we want to ship will come through this mechanism, where you can either quickly load the manifests that we ship and run the tests just to assess the quality of data management, or download a manifest just to get a ready sample that you can change to your requirements; that way you get a head start on the manifest itself.

So now we have loaded the manifest for data configuration, meaning that if I run it, it will only create the data project, it will not actually do the import of data, because that is what I want to show. Before I run it: this is a clean environment, there are no data projects yet, and the other key aspect is that I am in the DAT company. That means I can select all and run the tasks, and even if this were an import-related task and I wanted to import data into different companies, as a user I really don't have to switch companies, which today we have to do, because you have to be in the correct company to import the data into that company. The task automation takes the legal entity as an element in the manifest and uses that information to know which company it has to import data into. Hence this is another improvement to the overall process: you don't have to babysit the demo data import, for example. You can just select all, start the run, and it will take a couple of hours to import all the data; you can come back and the data is there in all the companies. If you had 30 or 40 companies, for example, this becomes quite handy in terms of not having to deal with 40 companies by switching one at a time.
So I'll just run the first one for now, in the interest of time. When I say Run task, it asks me to authenticate access to LCS, because the data packages are in LCS, so it's telling me: click here to connect to your LCS. The authentication is successful, so now I can go back and say OK, and it has started to execute. Basically, behind the scenes it is talking to LCS; it knows which asset library, which asset type and which asset it needs to download, and it downloads that. I'm sure by now it would have started to create it... yes, the data project has been created, and it is actually loading the data package into this data project. We can quickly check this after a minute once it is completed, but basically that is what is happening right now. The other thing I want to call out is that the dialog you see here is an indication that this is a blocking call; behind the scenes it is using the SysOperation framework, which means that you can run your tasks for hours and hours at a time without having to worry about session timeout issues. The reason this is important is definitely from a testability perspective, where it will be possible to run scenarios overnight and then come back in the morning to see the results. Even for data migration, for example, where certain data takes about three or four hours to import, if somebody were to automate that using this, the session timeout is no longer going to be a problem and you can just keep it running. That is what is happening here right now. If I want to cancel, I can always cancel; that specific task will be canceled and then I can come back and start the next task, for example, but basically it will keep going one after the other.

So now this is completed, and if I refresh it says the status is completed and the result is passed. I can go here, and if I quickly refresh: it only created the data project, it did not create any execution job, because that is what we wanted it to do. And now if I load the project, it has the demo data shared setup package all loaded, all the entities are here, and I did not have any specific configuration for the entities, but if I had, it would have gone and configured all of those as well. So it becomes quite easy to manage the configuration now, using these automation capabilities. So I can close this here.
And coming back here, now I can select another one and run it, or I can select everything and run everything. Now, about validations: this is where somebody can go and check the validations. This was a configuration-only project, so there was no data imported and hence there is nothing here, but if data was imported then the different validations would show up here, indicating whether they passed or failed. As we keep adding more validations this list will continue to grow, and it becomes a quick way for somebody to validate whether the data came in correctly or not; that's what I was talking about in the presentation. So I think that's what I had to present and demo today. I'll stop now and, Janice, we can open it up for Q&A.

All right, thank you, Sunil. All right, ladies and gentlemen, we're going to be starting our Q&A portion here in just a moment, but before we do, we have one polling question that we'd like to ask. I'm going to go ahead and bring that up on your screen right now; it should appear on the left-hand side of your screen. It says: would you like Microsoft to share test cases for data management via this new automation framework, to allow you to validate platform updates for data management functionality? We'll give a couple of moments for people to make their selections, and in one moment, if you said yes, we're going to have you leave your contact details, so be sure to hang around while we do that. It's going to be just a couple more moments while people read through that question and make their selections.
all right getting great participation thank you so very much
just a couple more moments
Okay, I'm going to go ahead and move on to the next portion. If you answered yes to that question, go ahead and leave your contact details in that white text box and click Submit; we're going to leave that up there for you while we start Q&A. So Sunil, we have a couple of questions here. The first question is: what is the use of "fail on execution unit, level, sequence"? So the use case there is that you can actually manage that configuration in the context of a data package, so you don't have to go and change it again and again from the UI itself; you can just manage it at the manifest level as part of configuration management. And second, you can then extend that from a testing perspective as well, if you would like to test whether that setting actually works or not.
Great. Our next question is: is the framework publicly available, and what is the minimum D365 version that it supports? So this is just another data management functionality; it is nothing different from any other Dynamics 365 for Operations feature. It is part of the product and it will be available starting in PU16, so if you are part of the early access program then yes, you will certainly get access to this as well, because it is just part of the standard product.

Great, and our next question is: are all possible elements of the manifest described somewhere in a guide or manual? Yes, we are in the process of authoring a wiki, and it will be published once PU16 is available. You can look at that as the manifest bible that will define each and every element, why it should be used and how it should be used, and we will also have best practices defined for creating and managing a manifest as well.
Great, and our last question that we have in the queue is: can this task automation be triggered from a VSTS build? That is a great question. The short answer is not currently, but that is definitely something we want to enable, because we can certainly see, even from an internal perspective and from our partner and implementation teams, that having a command-line interface to this would be super useful, as we could then integrate it with the build process. So definitely yes, we will do that, but in the first version it is not available.

Okay, and we had one more question pop in here: could you clarify where the task automation sits in comparison to PDPs? Another good question. Like I was trying to clarify, and to bring out the differentiation during the presentation: if implementation teams have requirements around using PDPs, which basically means you would like to manage data by business process and have that mapping, then LCS is certainly the way to go forward, and you should continue to use that functionality. But if implementation teams just want to manage data packages and import data packages, then this is another solution. Again, this is all need-based: whatever makes it easy for implementation teams, they should use those tools. We are just looking at giving options, and based on the requirements you can go ahead and choose which tool to use.
All right, we had another one pop in, and people keep popping them in, so we still have a couple of minutes left, folks; if you have questions, be sure to pop them in here and we'll get to them. The next one is: would you recommend using the framework or LCS for go-live scenarios? I would, depending on the requirements. For example, if you are looking at configuration management then definitely yes, you should use this capability. If you are looking at validations, which we will make part of the go-live checklist and are going to make part of the methodology as well, then this is the way to go in terms of validation of scenarios, just to make sure that a quick sanity check has been done before we go live. So certainly this is the way to go in terms of validations, and in terms of configuration management, yes.
Next question: we're looking for a way to test import and export of composite entities, preferably as part of the VSTS build; can this help with that as well? It can certainly help once we have the command-line interface that can be hooked up with the build process in VSTS. Once that is available you certainly can, and that is the very reason we will enable it, because there is so much value in integrating this with the build process as well.
All right, next one: can you clarify how the framework helps with the story of data validation and testing for data corruption, for example after an upgrade? If I understood the question, the key piece to understand here is that the import has to happen through the automation framework that we just saw. When you say upgrade, if it was a data upgrade then it happened offline, if you will, so at that point the automation framework cannot come into play to figure out whether everything happened correctly or not. Similarly, if you had hand-created the data projects and manually imported, as you are probably doing today, then there is no way of telling the automation manager: hey, go and look at this data project, inspect it, and come back and tell me if everything passed or failed. Now, that is a good idea; if I were to generalize it, any validations that we are building on this automation platform should be available on regular data projects also, and that brings a whole next level of flexibility, where even existing data projects can leverage this capability, but we are not there yet. So the short answer to your question is: when you talk about a strict data upgrade, then no, this cannot be used to validate data from a data upgrade process. Okay, next question.
Is there a way to modify the data packages before import, like we do with PDPs? That has to happen outside; the task automation manager and automation framework do not provide any capabilities, nor should they, for manipulating data packages. That has to happen outside, and once the data packages are ready, then they can come in.

All right, and that was our last question in the queue. We're going to take a moment here; we do have a couple more minutes left, so if you have any questions out there that you'd still like to ask, make sure to pop those into the Q&A panel and we'll be sure to address them. In the meantime, while we wait for those to come in, I would like to take a moment and bring your attention to a link that I posted in the messages panel. That's a link to a short survey for this web conference, and we ask that you please take a moment before logging out to access it. We hope that you found today's information helpful, and if you enjoyed today's web conference or have feedback on how we can provide you with a better event, this is your chance to let us know. The survey scores are on a scale from one to five, with five being the highest score possible. And we did have one more question pop in: will the storage of the data packages be moved from LCS to VSTS in the future? Again, a very good question, and something that we have talked about, to definitely make the build automation quite nifty and quite flexible. So yes, we definitely want to look into that aspect as well, but right now it is not there.
All right, and that's all the questions we have in the queue right now. Sunil, do you have any final thoughts before we wrap up? I just wanted to thank everyone for taking the time and spending it with us to learn about these new capabilities. We are excited to have this capability, and we're certainly looking forward to all of you using it and giving feedback. And again, thanks for all the feedback on the Yammer group; I enjoy discussing things with all of you there, so please leave your comments and let's continue the discussion on the Yammer group. Thank you.

All right, thank you, Sunil. All right, ladies and gentlemen, that is going to conclude today's web conference. Attendees can access the web conference recording via the same registration link used to attend today's live broadcast. I'd like to extend a big thank you to our presenter, Sunil, and thank you, audience, for logging in and joining us today.