
Project Estimation Best Practices: A Look at Our Three-Point Estimator

One of the biggest risks in undertaking a big software development project is underestimating the time and effort involved from start to finish. Our team has provided estimates for hundreds of projects, and sometimes we've missed the mark, especially in the early days of our business. Over the years, we've put together a strong model for estimating how much time and effort a project is likely to require; our employees undergo annual training on the method. Using these methods ensures we present our clients with the most useful estimates possible and effectively set expectations for each project.

Not Just Simple Math

When you think of hours estimation, you probably imagine a small team of people putting together a list of tasks and then assigning hours to each. From there, it's just simple addition, right? From the outside, I'm sure that's what the estimation process seems like. Behind the scenes, however, things work much differently.

Building Uncertainty Ranges

No two projects are exactly the same. Even if you feel like you're developing the same solution, there are too many variables at play to borrow from another experience to build your estimate. Comparing one project to another can be good for getting an order-of-magnitude or "ballpark"-level estimate. However, you wouldn't want to make firm commitments based on just a rough comparison.

Adding Non-Development Factors

Not all of the effort in a project is spent by developers. Our project management team works hard to keep communication open and transparent and to keep projects running smoothly. They need time to accomplish their tasks. Especially going into a new client relationship, no one knows how much time and effort that will take. Some of our clients appreciate a daily update email and have few questions. Others like detailed explanations of how things work behind the scenes.
Projects also vary widely in how much support they need. For small projects with a single developer, a project manager may stay mostly behind the scenes, making sure the developer has what she needs and sending periodic status updates. In larger projects with multiple developers, multiple client contacts, and a multi-step testing process, the project manager is heavily involved every day. He or she leads scrums and meetings, facilitates communication, and monitors overall project risks.

Besides project management, an estimate should include time for meetings, both internal and external, for quality assurance activities, and for work necessary for deployment, to name some of the major non-coding categories. An estimate that fails to budget time for activities like testing, internal and external communication, and project management may come up only slightly short on a smaller project. On a large effort, however, it will leave a major gap.

Foundation Phase Estimates

If a project requires up-front design and discovery work (and most do), you also need to account for business analysis time. During our Foundation phase activities (known in the industry as "discovery and design"), our business analysts work with clients to uncover their biggest challenges, opportunities, and goals and then build a plan to address the highest-value features and capabilities during solution development. As we wrap up design and discovery, our developers build a more accurate estimate of hours based on their lessons learned and the emerging solution blueprint. (In fact, we often provide mid-Foundation estimates in addition to post-Foundation estimates to help put our clients at ease and promote the transparency required for successful projects. Each estimate gets more accurate and therefore encourages better client communication.)

Consider Task Size

When estimating items, smaller is definitely better.
Most of us can't reliably estimate pieces of effort that are bigger than a day or two in size. If your estimate has an item that says "manage invoicing, 20-50 hours," you're going to be well served by breaking that into smaller units of functionality, such as "Create Invoice," "Print Invoice," "View List of Invoices," and so forth. If the project has several types of invoice, each with its own print format, then you can probably break "Print Invoice" into smaller items, one for each invoice type. If it's too soon to break that item down, maybe it's too soon to estimate it; or, if 20-50 hours "feels" right, just be aware that you're estimating partly on intuition. Make sure your estimate has a range that allows for the fact that there's more definition yet to do.

Number of Estimated Items

The number of estimated items matters, too. The fewer tasks you're combining into an estimate, the less meaningful your estimate is. Nobody estimates items right on the nose; we're either over or under for any given item. With a large number of items, we hope that the overs and the unders balance out. With too few items, your estimate risks being derailed by a poor estimate for a single item. We abide by the rule of ten: estimates containing fewer than ten items are very dubious. The farther past ten items you go, the more stable your math gets. For best results, try to have no fewer than twenty.

Size of Estimate Ranges

Just as important is for all of your estimate items to fit within a certain range of sizes. Huge disparities between your biggest and smallest estimate items suggest either that the small items have been broken down prematurely or that the big items are not yet broken down enough, adding uncertainty to your estimate and the project as a whole. As a rule of thumb, we try to keep the ratio between the largest and smallest items in an estimate to no more than 6x.
Incomplete Task Lists

Estimation is all about finding something to count; your estimate is only valid to the extent your list of work items is complete. If you're missing 20% of your tasks, it doesn't matter how good your estimates for the rest are; you can't possibly know how many hours you will need for what's not there. This is why our team prefers to make multiple estimates and only feels confident making commitments based on an end-of-Foundation-phase estimate. Once we've gone through a thorough Foundation for a given body of work, we feel pretty confident that no major items are completely missing from our list.

Using the Right Tools

To help our team estimate projects more effectively, we built a tool that puts these ideas into practice. It uses some simple statistical math to create a master estimate out of a collection of work item estimates. These estimates could be generated by a single developer, or, in a project with multiple people doing different kinds of work, they could come from separate estimators; the tool can merge the estimates of multiple estimators. Just remember that, as with all estimation, the best estimates are created by the people who will actually do the work. We believe successful businesses are built on efficient solutions, so how could we not build one for ourselves? We call it our Three-Point Estimator.

What is the Three-Point Estimator?

To provide our clients with accurate estimates, we built a custom solution for our projects. It starts with a set of line items representing individual work items; usually, these are user stories, though occasionally they represent necessary technical tasks. Estimators give each item three different estimates. One number represents the "expected" or "most likely" case: if everything goes reasonably well, it should take about this long. The other two numbers are a best case (if things go perfectly) and a worst case ("I can't imagine it taking longer than X").
The algorithm then measures the span of this range. Wider ranges indicate more uncertainty in the estimate, while narrower ranges indicate more confidence. Finally, the algorithm combines the line-item estimates into an overall estimate. Just like the individual line items, the overall estimate is a range.

Generally, we don't consider overly optimistic scenarios. We focus on project outcomes that, according to our tool, have a 60-90% chance of occurring. Sharing optimistic scenarios that have only, say, a 1 in 3 chance of coming true doesn't serve the project well. Our final project estimate, therefore, considers not only the sum of the starting task estimate ranges but also the uncertainty surrounding each item. This paints a clearer picture of what the project could require.

Of course, we are always working toward more concrete numbers. The more work we do on a project, the closer we get to accurate numbers. This is why we often have a project-start estimate but provide a much tighter estimate following our Foundation phase: by then, we've learned enough about the work through extensive conversations with the clients and end users to minimize uncertainty in our estimated ranges of hours required.

Building Your Own Estimates

During your project estimation, I recommend keeping these principles in mind:

- Your biggest risk is not in mis-estimating an item on your list; it's in having no estimates at all for items that aren't on the list but should be.
- Good estimates start with lists of at least 20 distinct items.
- Estimated items shouldn't vary too widely in size.
- Budget for non-development activities like meetings, interviews, project management, user acceptance testing, and training.

Then hold a group meeting for a frank discussion of the uncertainty in the project. Consider big gaps in estimates and what you need to know to reduce them. Then either track down that information or discuss and plan for best- and worst-case scenarios.
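The post doesn't disclose the exact math behind the Three-Point Estimator, but a common way to combine a best/likely/worst triple is a PERT-style weighted mean plus a spread term. Here is a generic sketch in FileMaker calculation syntax, using hypothetical field names:

```
/* Hypothetical PERT-style roll-up for one line item.
   expected: weighted mean favoring the most-likely value.
   spread: wider best/worst gaps mean more uncertainty. */
Let ( [
  best = Estimate::HoursBestCase ;
  likely = Estimate::HoursMostLikely ;
  worst = Estimate::HoursWorstCase ;
  expected = ( best + 4 * likely + worst ) / 6 ;
  spread = ( worst - best ) / 6
] ;
  expected & " ± " & spread & " hours"
)
```

Summing the expected values (and combining the spreads) across all line items yields an overall range rather than a single number, which mirrors the range-based estimates described above.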
Good luck with your next project estimate! Let us know if you have any questions in a comment below. The post Project Estimation Best Practices: A Look at Our Three-Point Estimator appeared first on Soliant Consulting.

Passing Script Parameters: A Complete Summary from Simple to Complex

What Are Script Parameters?

Script parameters are one of many tools that all FileMaker developers should learn to use when developing custom FileMaker applications. Their benefits include writing fewer scripts, increased code reusability, and improved code organization. By designing your scripts to accept parameters, you can use a single script with logic branching (i.e., If, Else If, Else) to perform multiple actions within your FileMaker solution.

For example, suppose you can create a Sales Order record from a few different contexts in your solution, such as from a Customer record, from a previous Sales Order, or from a cancelled Order. Often there are similar requirements when creating a Sales Order from each of these contexts: maybe you need to know the corresponding customer for the order, or maybe you need to know the shipping address. By passing script parameters, you can use a single script to create a Sales Order from various contexts without having to write and maintain a separate script for each.

There are many ways to format and pass script parameters in FileMaker, and in this blog post we are going to start from the beginning and work our way up to the various ways we can pass multiple script parameters. This blog post borrows heavily from Makah Encarnacao's FileMaker DevCon 2018 presentation, so be sure to check out her video, slides, and example file for more details.

Single Script Parameter

FileMaker provides one primary method to pass script parameters into a script: the "Specify Script" dialog available throughout FileMaker. To add a script parameter, simply type into the "Optional script parameter" field at the bottom of the Specify Script dialog. Anything you type into this field will default to quoted text, unless FileMaker detects that you have entered a specific field or function. You can also click the "Edit..." button to the right of the "Optional script parameter" field to open a calculation dialog and provide a more complex script parameter. In most of our multiple-parameter examples, we will use this calculation dialog to build the parameters we pass.

Figure 1 - Optional Script Parameter

Once we pass a single parameter through the "Optional script parameter" section, we can retrieve that value in our script using the Get(ScriptParameter) function. We will use this function throughout this blog post.

Multiple Parameters with Return-Delimited Lists

Sometimes a single script parameter is all you need to send to a script, but as we develop more complex solutions, we often need to pass multiple parameters to yield the desired result. The simplest way to send multiple script parameters is to use the pilcrow/return character (¶) or the List() function. For example, in our Specify Calculation dialog we may enter the following to pass information about a specific manufacturer in our solution:

MNF__Manufacturer::Name & ¶ & MNF__Manufacturer::Headquarters & ¶ & MNF__Manufacturer::NetWorth & ¶ & MNF__Manufacturer::DateFounded

This allows us to pass four lines of data from a Manufacturer record to our script. Inside our script, we can separate each of these four lines into its own local variable by using the GetValue() function, with each line parsed in its own "Set Variable" script step, as shown in Figure 2.

Figure 2 - Pass parameters with a return delimited list

This method of passing multiple script parameters does have potential drawbacks. If any field you pass as a parameter contains a return character (for example, when passing an entire paragraph), it can throw off your "Set Variable" script steps, because line returns in a field are preserved as part of parameter passing.
There are ways around this drawback, such as capturing those return characters and converting them to another, less common character. It's also worth noting that the order of script parameters must match the order of the variables you set in the ensuing script steps. A mismatch can lead to the wrong data being set into the wrong field, in turn leading to bugs that are difficult to troubleshoot. What other options do we have that may help prevent these shortcomings?

Multiple Parameters with Pipe-Delimited Lists

Instead of the return character, we can use another less common character, like the pipe character (|), to delimit our script parameters. In this example, we replace the ¶ character with the | character:

MNF__Manufacturer::Name & "|" & MNF__Manufacturer::Headquarters & "|" & MNF__Manufacturer::NetWorth & "|" & MNF__Manufacturer::DateFounded

With this method, we streamline the passing of parameters, but we need a few more functions on the variable-declaration side to properly parse out the values contained in our script parameter. It is fairly straightforward to get the first and last values using the Left() and Right() functions, which let us use the position of a specific pipe character to determine where character parsing should start and end. Here is an example of returning the first value in the script parameters:

Left ( $parameters ; Position ( $parameters ; "|" ; 1 ; 1 ) - 1 ) /* Subtract one because we don't want the pipe itself to be part of the name */

As we can see, this technique is more advanced and requires an understanding of the Left(), Right(), Middle(), and Position() functions to retrieve multiple parameters. However, these are powerful functions that every FileMaker developer should learn to use in their custom applications. We have other methods to pass multiple script parameters, many of which are more elegant than using the pipe delimiter.
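For values in the middle of a pipe-delimited parameter, the Middle() and Position() functions can be combined. Here is a sketch (not from the original post) that extracts the second value:

```
/* The second value lies between the 1st and 2nd pipe characters */
Middle (
  $parameters ;
  Position ( $parameters ; "|" ; 1 ; 1 ) + 1 ;
  Position ( $parameters ; "|" ; 1 ; 2 ) - Position ( $parameters ; "|" ; 1 ; 1 ) - 1
)
```

As the number of values grows, this position arithmetic gets harder to read, which is one reason the methods below tend to scale better.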
Multiple Parameters with the Let() Function

The Let() function is a native FileMaker function that is often a mystery box for new developers. Using the Let() function, you can not only make your complex calculations more readable and modifiable, but also declare local and global variables inside it. Therefore, we can use the Let() function to pass not just the parameter values but also the variable names. This is a super powerful function indeed! See the example below of passing multiple script parameters using the Let() function:

"Let( [ $name = MNF__Manufacturer::Name ; $headquarters = MNF__Manufacturer::Headquarters ; $netWorth = MNF__Manufacturer::NetWorth ; $dateFounded = MNF__Manufacturer::DateFounded ] ; $name )"

In the above code, we pass our Let() function inside quotes to preserve the code until we want to evaluate the parameters inside it. The first part of the Let() function is contained within square brackets ("[]") and is where we declare the local variables for each value. Each line inside this declaration is separated by a semicolon; think of the semicolon as the end of a sentence in the declaration. Because the Let() function is passed as quoted text, we can wait to declare our variables until we are inside the called script. There, we have FileMaker evaluate the Let() function by using the Evaluate() function, which simply takes the text passed to it and evaluates it. We can pass simple mathematical equations or other functions, as in the example in Figure 3:

Figure 3 - Pass parameters with the Let() function

With one script step, we are able to create multiple local variables for our script! See how powerful the Let() function is? Use it! There is a drawback to this method of script parameter passing, though: the parameter you pass is fairly verbose, and the potential for typos is higher. Typing the wrong variable name in your Let() function can have unexpected consequences.
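Inside the called script, the quoted Let() can be expanded in a single step, along these lines (a sketch; the variable names come from the example above):

```
/* One Set Variable step evaluates the quoted Let(), declaring
   $name, $headquarters, $netWorth, and $dateFounded as a side effect */
Set Variable [ $result ; Value: Evaluate ( Get ( ScriptParameter ) ) ]
```

The $result variable itself just holds whatever the Let() returns (in the example, $name); the useful work is the variable declarations that happen during evaluation.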
Make sure to test your script with various parameters when using this method.

Multiple Parameters with Custom Functions

An alternative to the Let() function is to use a pair of custom functions to declare and then set your script parameters to local variables. This allows predefining your script parameters as variables, but with a simpler syntax and fewer script steps, by using the "#Assign" custom function and its related custom function "#". Here is an example of how to pass script parameters using the "#" custom function:

# ( "name" ; MNF__Manufacturer::Name ) & # ( "headquarters" ; MNF__Manufacturer::Headquarters ) & # ( "netWorth" ; MNF__Manufacturer::NetWorth ) & # ( "dateFounded" ; MNF__Manufacturer::DateFounded )

This custom function uses name-value pairing: the value on the left side becomes your variable name in the ensuing script, and the value on the right becomes that variable's value. Once we pass our parameters in this format, we simply call the "#Assign" custom function with the Get(ScriptParameter) function, as shown in Figure 4:

Figure 4 - Pass parameters with a name-value pairs function

The result is multiple variables based on the name-value pairs we defined in our script parameter.
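In the called script, the whole set of name-value pairs is expanded with one step, along these lines (a sketch based on the description above; the exact behavior depends on the custom function's implementation):

```
/* #Assign parses the pairs and declares $name, $headquarters,
   $netWorth, and $dateFounded as local script variables */
Set Variable [ $assigned ; Value: #Assign ( Get ( ScriptParameter ) ) ]
```

Compared with the quoted-Let() approach, the pairs are shorter to write and the variable names travel with their values, which reduces the chance of an ordering bug.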
This method does require importing the custom functions into your solution, and it requires careful entry of name values to ensure variables are used correctly in your script. Overall, however, it provides an elegant and speedy way of passing script parameters and setting your script's local variables.

Multiple Parameters with JSON

FileMaker introduced native JSON functions in FileMaker Pro 16, though custom JSON functions had been around for some time before that. JSON stands for JavaScript Object Notation. It stores data in name-value pairs, similar to the custom function option described above. With JSON, we can use native FileMaker functions to pass multiple script parameters using the JSONSetElement() function. The added power of JSON is the ability to nest related data inside a JSON object. Think of this as the ability to send not only data from the current record you are viewing in FileMaker, but also related data, such as order line items or customer contact records. This allows larger sets of data to be transported in the well-known JSON format.
See this example of multiple data sets in a single script parameter declaration:

JSONSetElement ( "" ;
  ["name" ; MNF__Manufacturer::Name ; JSONString] ;
  ["headquarters" ; MNF__Manufacturer::Headquarters ; JSONString] ;
  ["netWorth" ; MNF__Manufacturer::NetWorth ; JSONNumber] ;
  ["dateFounded" ; MNF__Manufacturer::DateFounded ; JSONString] ;
  ["relatedModels" ;
    JSONSetElement ( "" ;
      ["model[0].name" ; GetNthRecord ( MNF_MOD__Model::Name ; 1 ) ; JSONString] ;
      ["model[1].name" ; GetNthRecord ( MNF_MOD__Model::Name ; 2 ) ; JSONString] ;
      ["model[2].name" ; GetNthRecord ( MNF_MOD__Model::Name ; 3 ) ; JSONString] ;
      ["model[3].name" ; GetNthRecord ( MNF_MOD__Model::Name ; 4 ) ; JSONString] ;
      ["model[4].name" ; GetNthRecord ( MNF_MOD__Model::Name ; 5 ) ; JSONString] ;
      ["model[0].body" ; GetNthRecord ( MNF_MOD__Model::BodyStyle ; 1 ) ; JSONString] ;
      ["model[1].body" ; GetNthRecord ( MNF_MOD__Model::BodyStyle ; 2 ) ; JSONString] ;
      ["model[2].body" ; GetNthRecord ( MNF_MOD__Model::BodyStyle ; 3 ) ; JSONString] ;
      ["model[3].body" ; GetNthRecord ( MNF_MOD__Model::BodyStyle ; 4 ) ; JSONString] ;
      ["model[4].body" ; GetNthRecord ( MNF_MOD__Model::BodyStyle ; 5 ) ; JSONString] ;
      ["model[0].year" ; GetNthRecord ( MNF_MOD__Model::Year ; 1 ) ; JSONString] ;
      ["model[1].year" ; GetNthRecord ( MNF_MOD__Model::Year ; 2 ) ; JSONString] ;
      ["model[2].year" ; GetNthRecord ( MNF_MOD__Model::Year ; 3 ) ; JSONString] ;
      ["model[3].year" ; GetNthRecord ( MNF_MOD__Model::Year ; 4 ) ; JSONString] ;
      ["model[4].year" ; GetNthRecord ( MNF_MOD__Model::Year ; 5 ) ; JSONString]
    ) ; JSONObject ]
)

To set each script parameter as a variable, we can use a custom function that works similarly to the #Assign() function but for the JSON format. Figure 5 shows an example using the JSONCreateVarsFromKeys() custom function:

Figure 5 - Pass parameters using JSON

If we don't want to use a custom function, we can use the JSONGetElement() function to grab each name-value pair individually.
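Retrieving individual values from the JSON parameter with the native function looks like this (keys taken from the example above):

```
/* Top-level value */
JSONGetElement ( Get ( ScriptParameter ) ; "name" )

/* Nested value: the first related model's name */
JSONGetElement ( Get ( ScriptParameter ) ; "relatedModels.model[0].name" )
```

Each JSONGetElement() call would typically feed its own "Set Variable" script step, one per value you need in the called script.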
The custom function route takes very few lines of code, but the native function provides individual script steps that may aid in debugging. Another thing to account for when using the native JSON functions is that not all data types match up perfectly between JSON and FileMaker. For example, the date format FileMaker uses is not directly supported in JSON, so we have to pass date values as strings. Here is a useful chart showing how each data type corresponds between the two formats:

FileMaker Pro    JSON Match
Text             String
Number           Number*
Date             String or Number
Time             String or Number
Timestamp        String or Number
Container        String
-                Object
-                Array
-                Null

Passing Containers

Up to this point, we have described how to send parameters in some sort of text format, whether the data type is date, time, string, or number. But what about binary data stored inside a native FileMaker container field? There are a few ways to send container fields as a script parameter, and I will describe the drawbacks of each.

The first method of transporting container data via a script parameter is to use the native FileMaker Base64Encode() and Base64Decode() functions. The Base64Encode() function takes container data and converts it into a large block of text, making it easier to transport as a script parameter or to other data destinations. To reverse the encoding, use the Base64Decode() function to store the data back in its native FileMaker format. See this example of passing a container as Base64-encoded text:

Base64Encode ( MNF__Manufacturer::Logo )

To reverse the process, we use the Base64Decode() function as shown in Figure 6:

Figure 6 - Use the base64 decode function

Unfortunately, some file metadata is lost in translation when Base64 encoding your container data, causing the loss of important information in the encoding/decoding process. For example, a JPEG image going through this process loses information related to creation date, modification date, latitude, and longitude, among other metadata.
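For reference, the decode step in the called script can be sketched like this (hypothetical target field; note that Base64Decode() takes an optional file name with extension, since the Base64 text itself carries no file name):

```
/* Store the decoded data back into a container field */
Set Field [ MNF__Manufacturer::LogoCopy ;
  Base64Decode ( Get ( ScriptParameter ) ; "logo.png" ) ]
```

Supplying the file name here only restores the name and extension; the other metadata described above is still lost.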
In some development situations, this is not an acceptable result. The alternative is to temporarily move the container data into a global container field; from there, set your destination field to the value stored in the global container. See the example shown in Figure 7:

Figure 7 - Set the destination container to the global

Conclusion

As we have seen, what starts as the simple passing of one parameter to a script can become quite complex and varied. This should not dissuade you from using script parameters. Their benefits are numerous and can take your FileMaker development to the next level. By using script parameters, you can increase your development productivity and streamline various functions within your database with less code. I hope that laying out the above options provides an informative matrix for navigating the possibilities, helps you find what works best for your development practice, and leads you to learn more about native and custom functions in FileMaker Pro. Have fun passing your parameters!

Resource

Makah Encarnacao's DevCon Presentation - FileMaker Community

Questions? Leave a comment below, or if you need help with your FileMaker solution, please contact our team. The post Passing Script Parameters: A Complete Summary from Simple to Complex appeared first on Soliant Consulting.

Portal Column Techniques: Sort, Filter, and Batch Update

FileMaker Portal Columns

I was recently asked to sort portal columns for a client, and I figured there had to be a newer and cooler technique out there than the one I used last time. I reached out to the other FileMaker developers at Soliant and got a great sample file from Ross Johnson. He shared a really cool technique with me, crediting mr_vodka (sounds like a fun guy!). Read mr_vodka's original post here. For my client, I was also asked to filter the portal and batch update columns. The end product came out pretty cool, so I decided to create a sample file with all of these techniques together in one file to share with the FileMaker community. The data in my sample file is from Mislav Kos' post: Test Data Generator.

Figure 1 - Portal with filters and batch update columns

Get the Demo File

Download the demo file to follow along with the instructions outlined below.

Sort

Here are the steps to complete the portal sort. You'll need to use the sample file to copy and paste some components.
Copy the field "zz_portal_sort_c" into your solution. You'll need to copy it into the table on which your portal is based. Open the field definition for zz_portal_sort_c. The only update you'll need to make to this calculation is to set the Let variable "id" to the primary key of your table. For example, if your primary key is something like "_kp__ContactID", you'll need to have "id = ¶ & _kp__ContactID & ¶ ;" (see Figure 2).

Figure 2 - Set the let variable ID

NOTE: Be sure this calculation returns a number (see Figure 3); that's very important!
Figure 3 - Select "Number" for the calculation result

Next, copy the script "sortRecords ( field { ; object } )" into your app. You'll need to update line 51 and change "ID" in the ExecuteSQL statement to use the primary key field of the table on which your portal is based (see Figure 4).

Figure 4 - Update line 51 in the script

You should also update line 6 (the header information) to note that you added this script to the file and the date you did so. Back in your layout, update your portal to sort by the new field you just added to the table.

Figure 5 - Update your portal sort

Name the portal object "portal". If you prefer a different object name, it can be anything you'd like, but you'll need to update almost all of the scripts in this demo to reflect the new portal object name.

Figure 6 - Name the portal object

You can now assign the script to your column labels with a parameter of: List ( GetFieldName ( <table>::<field> ) ; "portal" ). I also added some conditional formatting to turn the label bold when its column is sorted. Additionally, as a visual cue that the sort is bidirectional, I added up and down arrows for each column and assigned a conditional hide to them. You can copy and paste the buttons into your solution, then update the hide calculation to use your fields. And that's it! Once it's all set up, sorting for each column should work.

One thing I want to note: this method assumes that the relationship is one-to-many. I tried it with a Cartesian join, and it broke the sort. I haven't tried anything more complicated than one-to-many.

Filter Columns

Filtering each column allows the user to do an "AND" search in the portal, meaning the user can filter on multiple criteria at once. A single search bar, by contrast, gives you an "OR" search. To be honest, I haven't researched whether there's a better technique out there.
This method made logical sense to me when I wrote it, and luckily it worked. If you know of a better approach, I'd love to hear it; please leave a comment below. Here are the steps to complete this filter technique:

Create global fields for every column you'd like to filter.

Figure 7 - Create global fields

Place those fields on the layout.

Figure 8 - Place the fields on the layout

Add the script "Trigg_CommitRefresh" to your Script Workspace, and then assign it as a trigger to the filter fields with a trigger type of OnObjectExit. This script trigger commits the record and refreshes the portal every time a user exits a filter field. Gender is a little different; it uses OnObjectModify. You'll learn why gender is different a little further down in this post.

Now we update the filter calculation for the portal. You can copy the code from the filter calculation into your portal calculation and then update it in your file to match your fields. The filter calculation is a single Let statement with four parts:

1. Define "AllClear", a flag that checks whether all the globals are empty.
2. Define which filters have any text in them (in other words, which filters the user is applying).
3. Define each filter result at the record level. If the user entered filter text, does the current record pass that filter (return 1), or is it filtered out (return null)?
4. Finally, compare the results. If AllClear is true, always show the record (return 1). Otherwise, count how many filters the user is applying and how many columns pass the filter for the given record. If these two sums match, the record passes the filter check; if not, the record is filtered out.
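Put together, the four parts might look like this for two columns (the field names are hypothetical placeholders, not the demo file's actual names):

```
Let ( [
  /* 1. AllClear: no filter text anywhere */
  AllClear = IsEmpty ( Contact::gFilterName ) and IsEmpty ( Contact::gFilterCity ) ;

  /* 2. Which filters are active? */
  FilterName = not IsEmpty ( Contact::gFilterName ) ;
  FilterCity = not IsEmpty ( Contact::gFilterCity ) ;

  /* 3. Does the current record pass each active filter? */
  FilterName_R = If ( FilterName ; PatternCount ( Contact::Name ; Contact::gFilterName ) > 0 ) ;
  FilterCity_R = If ( FilterCity ; PatternCount ( Contact::City ; Contact::gFilterCity ) > 0 )
] ;
  /* 4. Show the record if no filters are set, or if every active filter passes */
  If ( AllClear ; 1 ; FilterName + FilterCity = FilterName_R + FilterCity_R )
)
```

Each additional filterable column adds one line to parts 2 and 3 and one term to each side of the comparison in part 4.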
You'll need to update the following for this calculation to work in your file:

- The globals you'd like to filter, within the "AllClear" definition
- The filter check section: Filter<FieldName> = not IsEmpty ( <Table>::<FilterGlobal> )
- The filter result section: Filter<FieldName>_R = If ( Filter<FieldName> ; PatternCount ( <Table>::<FieldName> ; <Table>::<FilterGlobal> ) > 0 )
- The results comparison: If ( AllClear ; 1 ; Filter<FieldName1> + Filter<FieldName2> + ... = Filter<FieldName1>_R + Filter<FieldName2>_R + ... )

NOTE: You'll notice the gender result is a little different; the "Gender Difference" section below explains why.

Gender Difference

For most of these filters, I use the PatternCount() function because I want to include partial matches. With gender, however, if I searched for "male," I would always get both male and female results, since the string "male" is inside the string "female." Because there are only two options in this case, I turned the filter into a radio button so I don't have to worry about partial word entries, and I can make a complete-word comparison in the calculation. That's why gender does not use PatternCount() and instead uses "=" to check whether the filter and the value are identical.

Batch Update

The batch update feature goes hand in hand with filtering: the user filters the data and then performs a batch update. When building this feature, I figured there were two ways to accomplish it: go to related records in a new window and perform a Replace Field Contents, or loop through the portal itself. I decided to loop through the portal because I liked that you don't have to leave the current window. Both methods accomplish the same goal, however, and if you have a lot of related records to update, Replace Field Contents might be a little faster. For a typical use case, looping through the portal works well. To complete the batch column update, you'll need to copy the script "BatchUpdate (field)" into your file.
If you haven’t already, you’ll need to name your portal object “portal” for this script to work. You should also update the history in the script header to communicate that you added this script to your file and when you added it. I recommend duplicating line 3 and then adding your name/email, the current date, and a note about how you copied this script into the file. The rest of the script is ready for use. If you’d like, you can customize the dialog in step 9. Now you’ll need to add the batch buttons to the layout. Your button will call the BatchUpdate script you just copied over and will pass the parameter of the field you’d like to update, in quotes. That’s the summary of how these three features are set up in the sample file. I hope you find it useful.
Figure 9 – Button setup

Questions? Leave a comment below, or if you need help with your FileMaker solution, please contact our team. The post Portal Column Techniques: Sort, Filter, and Batch Update appeared first on Soliant Consulting.

FileMaker Portal Columns: Sort, Filter, and Batch Update Demo

Take Your Portals to the Next Level

FileMaker portals are invaluable for showing related data; there are several techniques for enhancing their functionality. Our demo contains three methods for adding the ability to sort, filter, and batch update portals. Follow along with our step-by-step guide and get started with expanding the functionality of the portals in your FileMaker solution.
Complete the form to receive the demo file. The post FileMaker Portal Columns: Sort, Filter, and Batch Update Demo appeared first on Soliant Consulting.

Document Field IDs: Compare Field IDs Between Deployment Environments

Gemini to Apollo

Just like trailblazers such as Gus Grissom, who worked on the Mercury and, later, Gemini projects for NASA, we sometimes require separate or parallel environments in project deployments. Gemini could be viewed as the “Dev” phase that led up to the “Prod” project that followed: Apollo. Development environments are a critical part of projects, a place where experiments and testing can be done without risk to critical resources, allowing bugs to be found and worked out. Similarly, in FileMaker development it is not uncommon to have different environments dedicated to development and production. Active development occurs on a dedicated server, or offline, where changes are tested before being put into production.
Astronauts John Young and Virgil I. (Gus) Grissom are pictured during water egress training in a large indoor pool at Ellington Air Force Base, Texas. SOURCE: NASA Images at the Internet Archive

There are strategies to consider when working in such an environment to ensure updates are implemented successfully. In even larger systems there could be more environments for different purposes, such as development (DEV), quality assurance (QA), and production (PROD). These can occur in sequence or, more commonly, in parallel.

Parallel Development Environments

During Apollo 13, as with all NASA missions, there was a primary crew and a backup crew. When the accident with the oxygen tanks occurred, Ken Mattingly and a team of engineers were required to come up with a solution on the ground. Essentially, this was a Dev environment, taken to extremes, with the Prod environment being the Apollo crew in actual flight. Deke Slayton shows the adapter devised to make use of square Command Module lithium hydroxide canisters to remove excess carbon dioxide from the Apollo 13 LM cabin. SOURCE: NASA Apollo Imagery

Similarly, you can have different environments for your FileMaker project, albeit without the extreme conditions or consequences. Dev environments are sometimes required to work out and test various solutions before they are eventually put into production with confidence.

The Problem

When you create a field in FileMaker, things happen under the hood to make the field easy to reference throughout the solution. An internal "Field ID" is assigned to each field that you create. You never see this internal ID, but this is how FileMaker knows which field to reference in layouts and scripts. You might think that fields with the same name would map correctly across different systems, but it is the internal field ID that is used. Therefore, it is critical when deploying any updates that internal field IDs match.
For example, there can even be multiple developers working on a solution, and if different people are adding fields in parallel environments without consideration of the order they are added in, then the internal field IDs are bound to get out of sync. The next time you deploy an update, things can break if fields are not lined up.

Environment 1 | Developer A | Table 1

Internal Field ID | Field Name
1001 | ID
1002 | FirstName
1004 | LastName

Developer A created a field, “MiddleName”, which was assigned internal ID 1003. Later in development, the field was deleted, leaving a gap in the internal ID numbering sequence.
Environment 2 | Developer B | Table 2

Internal Field ID | Field Name
1001 | ID
1002 | FirstName
1003 | LastName

Developer B never created the “MiddleName” field, so internal ID 1003 exists in this environment but is associated with the “LastName” field.
Internal Field IDs

There are some internal tables that FileMaker uses to track schema, which are not visible anywhere in a file except by way of the ExecuteSQL function. By querying these "under the hood" internal tables, we can get information about the defined FileMaker fields, including their internal IDs.

The Solution: Pre-Flight Check

Astronaut Roger B. Chaffee is shown at console in the Mission Control Center, Houston, Texas, during the Gemini-Titan 3 flight. SOURCE: NASA Gemini Shuttle Mission Gallery

I built a small FileMaker file to automate the process of getting a list of all tables with their fields and internal field IDs, plus some scripting to show where there are conflicts. You can get the file free from here: All you need to do to use it with your files is follow the instructions, update the external data sources, and reference the one script you copy from this file and paste into the files you want to compare. When you run it, the necessary ExecuteSQL function returns its results to the parent script as a parameter, which we parse through to build the results as temporary records in our file. Special thanks to our own Brian Engert for helping optimize the SQL to only return fields for each base table, instead of each table occurrence in a file.
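The comparison the tool performs can be sketched as follows. In a real file the rows would come from ExecuteSQL against FileMaker's internal schema table (something like SELECT TableName, FieldName, FieldId FROM FileMaker_Fields; exact column names may vary by version); here the two environments are hard-coded to reproduce the example tables above:

```python
def field_id_report(env_a, env_b):
    """Compare (table, field id) -> field name maps from two
    environments and report every id whose field name differs
    (including ids that exist in only one environment)."""
    mismatched = {}
    for key in sorted(set(env_a) | set(env_b)):
        name_a = env_a.get(key)  # None if the id is absent here
        name_b = env_b.get(key)
        if name_a != name_b:
            mismatched[key] = (name_a, name_b)
    return mismatched

# The scenario above: "MiddleName" (id 1003) was created and deleted in
# environment 1, so "LastName" landed on 1004 there but on 1003 in env 2.
env_1 = {("T1", 1001): "ID", ("T1", 1002): "FirstName",
         ("T1", 1004): "LastName"}
env_2 = {("T1", 1001): "ID", ("T1", 1002): "FirstName",
         ("T1", 1003): "LastName"}
print(field_id_report(env_1, env_2))
# {('T1', 1003): (None, 'LastName'), ('T1', 1004): ('LastName', None)}
```

The report makes the deployment hazard visible at a glance: "LastName" is bound to id 1004 in one environment and id 1003 in the other, which is exactly the kind of mismatch that breaks layouts and scripts after an update.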
Now What?

Once you find an issue where fields do not match, what do you do? That depends on what has been done and where those fields appear in your development. This tool merely identifies where the issues are; what you do about them is a separate decision. Ideally, you identify problems early and can correct fields in both environments before they become an issue. You might also consider implementing the FileMaker Data Migration Tool for options regarding matching on field name during migration, to get environments back to a baseline file state. Read more about the Data Migration Tool. Also, thank you to Norm Bartlett for reviewing and contributing to this post.

References

GitHub Repository:
Gemini 3:
Apollo 13:

If you have any questions or need help with your FileMaker solution, please contact our team. The post Document Field IDs: Compare Field IDs Between Deployment Environments appeared first on Soliant Consulting.



DevCon 2018 Follow-up: The Story of Billy Bass – Part Five

This is the final post in my series about the demo I presented during my “IoT and the FileMaker Data API” session.

Using Visemes

The 'master' branch is the more interesting one, as it uses the viseme information that Amazon Polly provides. For this to work I needed some linguistics help to interpret what those visemes mean. In my DevCon session I jokingly stated that the session happened because of the help of both my daughters: one is in university studying Biology (fish, get it?) and the other is in university studying Linguistics. So I was all set. Thanks, girls! Back to those visemes: Figure 28 shows what Amazon sends you: Figure 28 - Viseme information It tells you the mouth position at any time as it changes during the audio, and we can determine the duration from the time elapsed between two entries. But which of those weird-looking viseme characters indicate whether the mouth is open or closed? Obviously, the plastic fish doesn’t have the full mouth range that we have; it can’t purse its lips, for instance. So I just had to find the ones that mean the mouth is open, half open, or closed. Figure 29 shows the visemes in the master branch code.
Figure 29 – Viseme characters in the master branch code

Query the FileMaker Data API

When we query the FileMaker Data API and retrieve the viseme data we received from Amazon Polly, we process it and build a smaller list of just those events that have to do with the vowels that open the mouth:
Figure 30 – Query the FileMaker Data API

Later in the code, when it is time to play the audio and move the head and mouth, we spawn a separate thread (lines 267 and 273) to process the data we retained (see Figure 31). Line 267 invokes the function that loops through the viseme data array and moves the mouth at the appropriate time.
Figure 31 – A separate thread is spawned to process the retained data

In Figure 32, line 78 opens the mouth for a pre-determined length of time (set in billy.ini), and line 83 puts the subroutine in waiting mode until it is time for the next line in the viseme data, based on the time stamps in the Polly data.
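The viseme-to-movement step can be sketched in Python. Polly's speech marks arrive as newline-delimited JSON objects with millisecond timestamps; which viseme symbols count as "mouth open" is my assumption here for illustration (the real mapping lives in the master branch code, Figure 29):

```python
import json

# Visemes treated as "mouth open" -- an assumed set, not Polly's spec.
OPEN_VISEMES = {"a", "e", "E", "o", "O", "u", "@"}

def mouth_events(speech_marks):
    """Turn Polly viseme speech marks (newline-delimited JSON) into
    (delay_seconds, action) pairs the mouth thread can step through,
    sleeping for each delay before performing the action."""
    events = []
    last_ms = 0
    for line in speech_marks.strip().splitlines():
        mark = json.loads(line)
        if mark["type"] != "viseme":
            continue  # skip word/sentence marks if present
        action = "open" if mark["value"] in OPEN_VISEMES else "close"
        events.append(((mark["time"] - last_ms) / 1000.0, action))
        last_ms = mark["time"]
    return events

marks = ('{"time":0,"type":"viseme","value":"p"}\n'
         '{"time":125,"type":"viseme","value":"a"}\n'
         '{"time":250,"type":"viseme","value":"t"}')
print(mouth_events(marks))
# [(0.0, 'close'), (0.125, 'open'), (0.125, 'close')]
```

A mouth thread would then time.sleep() for each delay before driving the motor, with the billy.ini audio-start offset applied before the first event.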
Figure 32 – Code

You will note that line 272 in Figure 31 introduces a delay between starting the voice (the audio playback) and starting the thread for the mouth. That is a value set in billy.ini, tuned to what I found necessary to make sure the mouth action did not start earlier than the audio playback. And with that I was pretty much done; everything was working as I wanted. The new pinion gears arrived, and I replaced the old gears on all three of my Billy Basses (the original one and my two Billy Bass Bones), and now the heads turned the proper 90 degrees, making Billy look straight at you as it begins talking. Mission accomplished! My last worry was whether I would be able to access my virtual instance of Windows 10 where my Visual Studio 2017 lives (it’s on a VMware server here in the office, behind a VPN), given how flaky hotel networks can be. It turns out that the excellent (and free) Visual Studio Code editor on macOS also supports Python, as shown in Figure 33.
Figure 33 – Visual Studio Code editor

All I had to do was clone my GitHub repository to my Mac, and away I went. From my Mac, I can use SCP (Secure Copy Protocol) to copy the modified Python scripts over to the Raspberry Pi and use SSH in Terminal to run the main Python script, putting the Raspberry Pi in its wait-and-read loop. Figure 34 shows the Python script running on the Raspberry Pi, logging every step of its actions. Figure 34 - Python script running on the Raspberry Pi And here is a final picture of my hotel room with the whole thing set up for last-minute testing: Figure 35 - Final testing Time for a little fun, then. Early on I mentioned that the text we are sending to Amazon Polly uses SSML markup. Using SSML allows us to specify things like accents, or tell Polly to speed up the text narration, like so: <speak>
Hi, my name is Andrew Lecates, when I get going, <prosody rate="x-fast">you really have to fasten your seatbelts, because there is a lot to cover and you are standing between me and my coffee, or red bull, or whatever the heck it is that makes me talk like this.</prosody><amazon:breath duration="x-long" volume="x-loud"/><break time="500ms"/></speak>

FileMaker Go

Figure 36 – Demo file on iPhone or iPad

If you open the demo file on your iPhone or iPad, you’ll go straight to a device-specific layout where you can flip between the records and use the big buttons to flag the record you want Billy to speak. You can also use the “Head” button to make Billy only turn his head without saying anything.
Conclusion

I had a lot of fun putting this particular DevCon presentation together, and there is something immensely satisfying in seeing devices collect sensor data and in being able to make things move from a FileMaker solution. If nothing else, I hope I have demonstrated that the FileMaker platform fits in very well in this IoT environment. Writing the code to run on the Raspberry Pi was fairly straightforward, in both C# and Python. None of the demos that I used in my session are more than 300 lines of code, with plenty of whitespace, comments, and logging. Don’t let unfamiliarity with those languages deter you; it's easier than you think. Questions about anything in this series? Leave a comment here or find me on
The Story of Billy Bass - Part One (setting it up)
The Story of Billy Bass - Part Two (making Billy Bass move)
The Story of Billy Bass - Part Three (using Raspberry Pi)
The Story of Billy Bass - Part Four (switching to Python)
The Story of Billy Bass - Part Five (using Visemes)

The post DevCon 2018 Follow-up: The Story of Billy Bass – Part Five appeared first on Soliant Consulting.



DevCon 2018 Follow-up: The Story of Billy Bass – Part Four

This is the fourth in a series of posts about the demo I presented during my “IoT and the FileMaker Data API” session.

Making it Work in Python

The first half of my DevCon presentation would be about collecting sensor data from a Raspberry Pi and sending that data to FMS through the Data API, and I am using Windows 10 IoT and .NET code for that. If I could do the second half of the presentation using Python, it would show off the versatility of both the Raspberry Pi device and the FileMaker Data API. Raspberry Pis don’t have hard drives; they run from Micro SD cards, so switching operating systems is as easy as switching SD cards. I got one with the latest Raspbian installed and then followed the Adafruit instructions to get their Python library installed and run the tests. The tests worked just fine, so I could move on to figuring out how to write the code in Python.

Finding the Right IDE

I was trying to figure out what IDE (Integrated Development Environment) I would need to write Python code, but it turns out that Visual Studio 2017 does that just fine, so I didn't have to learn a new tool. The fully working source code for what I ended up doing in my DevCon session is here on GitHub: Figure 22 is from a Windows virtual instance running Visual Studio 2017 and shows the files in my source code: billy.ini is the config file (my FileMaker Server address, name of the file, layout and login credentials, and so on) is the working version of the whole thing: query the FileMaker Data API to see if we need to play audio, and the code to make the mouth and head turn. Before I got to writing the whole thing, I needed to figure out the correct settings to move the fish’s mouth and head.
For that I started with two test files: and

Figure 22 – Python code

The purpose of those test files was to figure out the ideal settings for the motor: enough power for just long enough to make the head and mouth move, but not so much that I would break the mechanism. If you want to build your own Billy Bass setup, I strongly suggest using these two files to find those ideal settings for your fish. I was working with two Billy Bass Bones, since I wanted redundancy in case I broke one, and found that they each required subtle differences in those values for maximum effect. Since time was running out, I ended up storing these motor values in the billy.ini config file (see Figure 23) instead of storing them in the FileMaker file and reading them from there. It would not be a big change to add that, since I had it working in .NET (see the MOTOR and MOTORHAT layouts in the FileMaker demo file).
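Reading those motor values from billy.ini is straightforward with Python's standard configparser module. The section and key names below are invented for illustration; they are not the actual layout of my billy.ini:

```python
import configparser

# Stand-in for the real billy.ini contents (hypothetical keys).
ini_text = """
[head]
speed = 200
duration = 0.6

[mouth]
speed = 255
duration = 0.12
"""

config = configparser.ConfigParser()
config.read_string(ini_text)  # or config.read("billy.ini") on the Pi

# Typed accessors avoid manual int()/float() conversion everywhere.
head_speed = config.getint("head", "speed")
mouth_duration = config.getfloat("mouth", "duration")
print(head_speed, mouth_duration)  # 200 0.12
```

Keeping the per-fish values in an INI file means each Billy Bass can be tuned without touching the code, which is exactly why the values ended up there when time ran short.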
Figure 23 – Motor values stored in the billy.ini config file

Re-coding in Python

I already had most of the code developed in C#, so the logic was clear to me. Figure 24 shows the workflow that I needed to re-code in Python. The Raspberry Pi would query FileMaker Server twice every second to see if any records were flagged there. If there was such a record, it would download the mp3 and play it back while moving the mouth and the head. The first order of business was to find a good Python wrapper around the FileMaker Data API. I settled on David Hamann’s fmrest, on GitHub. As I had never written Python before, David was very gracious in answering my questions to keep me going. One of the challenges was finding out how to do multiprocessing in Python, because moving the head and mouth had to happen at the same time as playing the audio. That took a bit of Googling to get right; fortunately, there is a ton of useful info around. Moving the head was easy enough: once I have the audio, I can figure out how long it is and then send a signal to the head motor to move in the right direction for that length of time. That happens in the block of code shown in Figure 25.
Figure 24 – Workflow for re-coding in Python

Figure 25 – Code block

But there was the obvious big challenge: how do I make the mouth move in an approximation of what the audio is saying? When you check the source code on GitHub, you’ll notice that there are three branches (see Figure 26). “Pacat” can be ignored; that was an attempt to have the Raspberry Pi ‘listen back’ to the audio as it was playing to try and determine when to open and close the mouth based on volume or something else. I abandoned that branch as I was running out of time. The “master” branch is where I use the visemes supplied by Amazon’s Polly to move the mouth, and “AudioSample” uses the mp3 amplitude to determine when to move the mouth. That approach does not work with the visemes but just inspects the audio file itself.
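Stripped of the pydub dependency the real AudioSample branch uses, the amplitude idea boils down to something like this pure-Python sketch over raw samples (function and parameter names are mine, not the branch's):

```python
def mouth_schedule(samples, rate, chunk_ms=200, threshold=0.3):
    """Split audio samples into fixed-length chunks and flag which
    chunks are loud enough (by RMS amplitude) to open the mouth."""
    chunk_len = int(rate * chunk_ms / 1000)
    flags = []
    for start in range(0, len(samples), chunk_len):
        chunk = samples[start:start + chunk_len]
        # Root-mean-square amplitude of this chunk
        rms = (sum(s * s for s in chunk) / len(chunk)) ** 0.5
        flags.append(rms > threshold)
    return flags

# 1 second at 1000 Hz: quiet, loud, quiet, loud, quiet (200 ms each)
samples = ([0.01] * 200 + [0.8] * 200 + [0.01] * 200 +
           [0.8] * 200 + [0.01] * 200)
print(mouth_schedule(samples, rate=1000))
# [False, True, False, True, False]
```

Each True flag would open the mouth for the configured duration; the chunk length and threshold are exactly the tuning knobs discussed below.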
Figure 26 – Three branches in the source code

Using Amplitude

"AudioSample" in essence uses the block of code shown below to chunk up the audio into segments of 200 milliseconds, and for each segment it determines the amplitude of that chunk (see Figure 27). If the amplitude is higher than the threshold, it runs the motor to open the mouth for the preset duration and then closes it again. Figure 27 - Code block for "AudioSample" The variables involved here to make this look natural are:

- the length of the audio segment
- the amplitude threshold
- 'fish_mouth_duration': how long to keep the mouth open
- 'time_allow_for_spent': to account for the time it takes to mechanically open and close the mouth

With some careful testing, you can get them just right depending on your fish. The code depends on the pydub library for the audio inspection.

The Story of Billy Bass - Part One (setting it up)
The Story of Billy Bass - Part Two (making Billy Bass move)
The Story of Billy Bass - Part Three (using Raspberry Pi)

In my final post, I'll show how I used visemes. The post DevCon 2018 Follow-up: The Story of Billy Bass – Part Four appeared first on Soliant Consulting.



DevCon 2018 Follow-up: The Story of Billy Bass – Part Three

This is the third in a series of posts about the demo I presented during my “IoT and the FileMaker Data API” session.

Construct #2 - Billy Bass Bones

As mentioned in my previous post, I decided to do everything from the Raspberry Pi to control the Billy Bass toy. While I was scouring eBay looking for pristine Billy Basses, I came across the one shown in Figure 15. Figure 15 - Ordering a Billy Bass I thought that would be funny, and I could make the story about the fact that almost 20 years have passed since Billy Bass’ first DevCon and that he… well… became bones. Unfortunately, when I got this fish, it too did not turn its head the full 90 degrees that I wanted. Nevertheless, I ‘gutted’ the fish in the same way as the original one by removing its controller unit and leaving only the wires that come down from the motors and the speakers (see Figure 16). This type of Billy Bass has two motors: one for the mouth and one that drives the head or the tail, depending on whether you run the motor forward or backward.
Figure 16 – Inside Billy Bass with the controller unit removed

Issue with the Pinion Gear

Since I was seriously disappointed about both Billys not turning their heads properly, I did some more searching and found that it is a bit of a known issue: the pinion gear that sits on the shaft of the motor that moves the body can crack. In fact, it almost certainly will break given enough usage and time. And since all of these toys are second-hand, it is pretty much a given that they will have been used enough times that those gears will have cracked and the head will no longer turn all the way. There are some informative videos on YouTube that show how to replace it, so after a lot of trepidation I decided to go ahead and disassemble the fish further to get to the body motor. Figure 17 - Pinion gear on the shaft of the motor can crack You cannot tell from the pictures in Figure 17, but there was indeed a crack between two of the teeth, and because of it the gear is not tight enough on the shaft. When the motor turns, it does not deliver all of its power through the gear, and the gear slips. That is why the head does not turn all the way. Not all Billy Basses, however, use the same type of gear. While they all have 2mm shafts, some gears have nine teeth and some 10 or 11, so you need to get the gear out first before you know which ones to order. Mine were all 9-tooth gears, so I got a bunch from Amazon. They are very cheap, but the delivery time was making me a bit nervous. And at this point, I wasn't entirely certain that I could re-assemble the whole fish. While I was waiting for the replacement gears to arrive, I set about finding out how to drive the motors from the Raspberry Pi. I settled on the MotorHat from Adafruit. Adafruit makes a bunch of quality hardware for Raspberry Pis and Arduinos, the documentation is excellent, and they have a good support forum. They also have a .NET library for their hardware, including this motor hat.
The MotorHat has four motor connections, more than enough to drive the two Billy Bass motors. “Hats” or “Shields” are an integral part of the Raspberry Pi ecosystem and that of similar boards like the Arduino and Particle; hats add hardware functionality very easily. I have a few slides in my DevCon presentation that show the concept: To my dismay, however, the motor hat had to be soldered together. I looked for pre-assembled ones before I became resigned to the fact that, yes, I would have to solder. So it was back to Amazon, and I got a soldering iron set. But it stopped working after just one use, so I bit the bullet and got a decent one. It turns out that soldering is a lot of fun. Who knew. It helps that the Adafruit instructions are complete and easy to follow. By this time my dining table had started to look quite full (Figure 18).
Figure 18 – My “workshop” With just a few weeks left before DevCon... Of course, none of this would have happened without the support of my loving wife Nicky — a fellow FileMaker developer and great artist — whom I robbed of that dining room table for a few months, and whose patience I know I’ve tested on more than one occasion. To keep the soldering to a minimum and to allow for quick assembly and disassembly I was still using a breadboard and jumper cables to connect the Raspberry Pi and Billy Bass. Figure 19 shows the Raspberry Pi with the MotorHat on and the cables running to the breadboard. That takes care of the hardware to control the motors. Now I needed to write code to send instructions from the Raspberry Pi, through the Motor Hat to the Billy Bass. I started with the MotorHat demo for C# that they have on GitHub and the underlying Adafruit C# library:
Figure 19 – Raspberry Pi and MotorHat

Looking at their demo I could see that there are a few settings that need to be specified to drive a motor correctly: the Pulse Width Modulation (PWM) frequency, the direction of the motor (forward or backward), the speed, and the duration. I broke those out in my FileMaker demo file so that I could modify the settings right there in my FileMaker record and have the Raspberry Pi read from FileMaker Server and send the instructions to the motors. Figure 20 - Modifying the motor settings in my FileMaker demo file Having learned that those pinion gears on the fish were a little fragile, and since I did not have the replacements yet, I did not want to experiment with the actual Billy Bass. It was very likely that I'd mess things up at some point and drive the motors too hard and too long. So I just hooked up a DC motor to the hat so that I could tell whether it was running correctly, based on my code. The source code is here on GitHub: Figure 21 shows the hardware setup. Since I am using jumper cables and a breadboard, I can easily switch the motor from port 1 to port 2 and change its polarity so that ‘forward’ in my code corresponds to the motor running in the right direction. Note that you will need a good power adapter to run the MotorHat. I tried to skimp on that but found that the hat did not respond very well, so I ended up getting the power adapter directly from Adafruit. Everything was working well except that I was getting an error every second time I ran the code. Trying to troubleshoot it, I came across an open issue in the Adafruit .NET library, and the suggested fixes did not work for me. It looked like I was stuck, and we were into June now. I waited a couple of days to see if there was any feedback on the issue, but none came.
Figure 21 – Hardware setup

Since the default OS for a Raspberry Pi is Raspbian Linux, most of the example code you'll find is Python, and Adafruit has a Python library for their MotorHat, I decided to switch gears and see if I could make it all work in Python. I had never touched Python before, so I was feeling more than a little anxious. But I was also excited by the idea.
DevCon 2018 Follow-up: The Story of Billy Bass - Part One
DevCon 2018 Follow-up: The Story of Billy Bass - Part Two

In my next post, I'll discuss how I made it work in Python. The post DevCon 2018 Follow-up: The Story of Billy Bass – Part Three appeared first on Soliant Consulting.



DevCon 2018 Follow-up: The Story of Billy Bass – Part Two

This is the second in a series of posts about the demo I presented during my “IoT and the FileMaker Data API” session.

Construct #1 - the original Billy Bass

I wanted to find a Billy Bass that was as close as possible to the original one used in the 2001 DevCon session, so I started digging around on eBay, and this is the one I ended up buying: Figure 9 - Buying an original Billy Bass When it arrived, I unscrewed the back to see how I could get at the motors (see Figure 10). There's a small controller unit in there with wires going to the motors, the speaker, the motion sensor, and the battery compartment. Figure 10 - Controller unit wired to components I cut all the wires as close to the controller unit as I could and then removed everything except the wires that go to the three motors and the wires that come down from the speaker.
This particular Billy has three motors: one for the head, one for the tail, and one for the mouth. Those are the wires that go off to the big white area in the center of the fish. When I was done with the demolition, I had just three pairs of wires, each leading to one of the motors.
Figure 11 – Cut controller wires

These motors are fairly basic 6V brushless DC items (as I would find out much later in the process; at this particular time I saw no reason to take them out just to look at them, since I was not confident that I could put it all back together).
Figure 12 – Motor

Linking the Raspberry Pi and the Motors

Now I had to find something to connect those motors to and make them run: a motor controller board that would bridge the gap between the Raspberry Pi and the motors.
As I was searching for existing Billy Bass projects (there are a few that use Alexa), I came across a small device named the Audio Servo Controller that seemed promising. In talking to Jack, who created the board, he mentioned that he had another board that was better suited for the task: the Squawker Talker. You can see it in action in a demo video on page 2 of this thread. The Squawker Talker drives the motors, and since it has a built-in 3-Watt amplifier that uses Line-In for its audio input, I could use a simple 3.5mm male-to-male audio cable from the Raspberry Pi to this unit to play the audio and connect the amplifier on the board to the Billy Bass speaker. And it comes with a power supply to drive it all. That looked perfect, so I ordered one of the Squawker Talker units ($65). At this point I was not comfortable with soldering, so I settled on using heat shrink seals to connect the Billy wires to small jumper cables to plug into a breadboard. With the heat shrink seals, I could use a heat gun to ‘melt’ the wires together and avoid the soldering. You can see the result with the heat shrink seals in the picture above that shows the motor, about halfway along the wire. When I had all the wires connected, it looked like Figure 13. The green unit is the Squawker Talker; the white unit is the breadboard where I plug in the cables. Figure 13 - Billy Bass wires connected to the Squawker Talker

How Does it Work?

The Squawker Talker controls the motors and the speaker. So where does the Raspberry Pi come into play? The Raspberry Pi (RPI) is the 'connected device' in this IoT setup. In other words, it has to run on its own and have a connection to the internet so that it can check whether we have flagged a record in FileMaker to be played by Billy Bass.
I was already using Raspberry Pis as IoT devices to collect sensor information and send it to FileMaker Server, and I was using Windows 10 IoT as the operating system for the RPIs, so I decided to use Windows 10 IoT here too. My preferred language for this is C#, and I have an open source project on GitHub that is a .NET wrapper around the FMS Data API (
With that, it was a natural thing for me to turn to C# to write the code to run the Raspberry Pi. You can download the code here:
In a nutshell, the code does this:

1. There is a small config file to store the name of the FileMaker server, the name of the FileMaker file, the layout to use, and the FileMaker account and password to access the file;
2. The Raspberry Pi loops every half second and queries the FileMaker Server through the Data API (the interval is a config setting too), looking for any records that have the “flag_ready” field set to 1 (the checkbox labeled “Speak It” on the BILLY layout);
3. If it finds a record where that checkbox is checked, it downloads the mp3 from the container field;
4. It then uses the operating system media player to play the mp3;
5. At this point the Squawker Talker takes over, forwarding the audio to the Billy Bass speaker and moving the body and mouth.

Figure 14 – Raspberry Pi connected to the Squawker Talker

And here is the end result:
Based on the audio input, the mouth will move in sync, and you can adjust the threshold level on the Squawker Talker by adjusting screws on the board. The body and tail move randomly. As you can see in the video, the head does not turn the full 90 degrees, and I was a little unhappy about that. I wanted the head to turn all the way so that Billy Bass would look at you, as it is meant to. I also felt that the whole construct was a little too... easy: it's not driving the motors directly; the Squawker Talker board takes care of that. I decided to have another go at it, this time without the Squawker Talker board; I would do everything from the Raspberry Pi. And I wanted a fish whose head worked properly. So I looked around on eBay a little more to see if I could find one guaranteed to have a head that turns better, and then find a way to interact with the motors.
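The wait-and-check loop in the five steps above can be sketched like this (in Python for illustration, though my actual implementation was C#; the fetch and play functions are injected stand-ins for the Data API find and the audio/motor calls):

```python
import time

def billy_loop(fetch_flagged, play_audio, poll_seconds=0.5, max_polls=None):
    """Poll for records with flag_ready = 1 and play each one's mp3.

    fetch_flagged() stands in for the Data API find request;
    play_audio(record) stands in for downloading the container mp3
    and handing the audio to the media player / Squawker Talker.
    max_polls bounds the loop for testing; the real device runs forever.
    """
    polls = 0
    played = []
    while max_polls is None or polls < max_polls:
        for record in fetch_flagged():   # usually an empty list
            play_audio(record)
            played.append(record["id"])  # and clear flag_ready here
        polls += 1
        if max_polls is not None and polls < max_polls:
            time.sleep(poll_seconds)     # the half-second interval
    return played

# Simulated run: one flagged record shows up on the second poll.
queue = [[], [{"id": 7, "mp3": b"..."}], []]
result = billy_loop(lambda: queue.pop(0), lambda r: None,
                    poll_seconds=0, max_polls=3)
print(result)  # [7]
```

In production the loop would also reset the record's flag after playback so the same mp3 isn't played on every subsequent poll.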
DevCon 2018 Follow-up: The Story of Billy Bass – Part One
In my next post, I'll show how I used the Raspberry Pi to control the Billy Bass head. The post DevCon 2018 Follow-up: The Story of Billy Bass – Part Two appeared first on Soliant Consulting.
View the original post

DevCon 2018 Follow-up: The Story of Billy Bass – Part One

At this year’s DevCon in Dallas, TX, I presented a session on FileMaker and IoT (Internet of Things); in preparing for it I settled on this definition of IoT:
Based on that definition and the highlighted core elements, I split the DevCon session into two parts: collecting sensor data and sending it to FileMaker Server through the Data API; and manipulating the environment by driving motors based on data retrieved from FileMaker Server. The first part was easy, using a couple of Raspberry Pi 3s, a SenseHat on one and a GrovePi hat on the other, with various sensors to collect data. You can download the FileMaker demo file and find links to the source code here: I had been using those same Raspberry Pis to stress-test the beta versions of the FileMaker 16 and 17 Data API, and it has made me a big fan of the Data API: it is fast and robust. But I had to find a good demo for that second aspect: driving motors. A robot or a car were possible choices but probably too obvious. I toyed a bit with the idea of a "useless box" but couldn't quite work in a good FileMaker angle. Then I thought of Billy Bass. Back in 2001/2002, Rich Coulombre and his team at The Support Group used the Troi serial plugin to connect Billy Bass to their computer and make it talk. A Gemmy Billy Bass toy has two or three motors to move the body, tail, and mouth, so it would be perfect, and a nice link to the past, so I went for it. This is what it had to look like upon completing the work:
- a FileMaker file hosted on FileMaker Server, and to keep things in the 'Cloud' realm, specifically FileMaker Server 17 running on an Amazon EC2 instance;
- the user should be able to type anything into a text field in that FileMaker file;
- and Billy Bass should then speak what is typed.
The Billy Bass is connected to the internet through a Raspberry Pi, listening for instructions through the FileMaker Data API.
Figure 1 - Diagram of the Billy Bass workflow
Off to work then.
FileMaker Server and the FileMaker file: text-to-speech
Putting the file on a FileMaker Server was easy; I chose my dev server, which is part of our infrastructure. It's a Windows instance on Amazon Web Services (AWS) in the Amazon us-east-1 region, which comes down to North Virginia. The server is fully patched with FileMaker Server 17v2. Now for the user typing text into that hosted file. The end result is that we need to get audio for whatever was typed in, so we need a text-to-speech service that we can send the text to and that will generate the speech audio. There are several of those services around, but I am most familiar with the Amazon Polly service. If you want to play around with the service, you can test it out here after signing in with your AWS account: To interact with Polly from inside FileMaker we will be using the Polly REST API. I could have used the Raspberry Pi itself to interact with Polly through one of the available SDKs, but I wanted this demo to be about FileMaker's capabilities, and FileMaker is extremely good at interacting with REST APIs. So, FileMaker will send the text to Polly and receive the audio file in return.
In the FileMaker file, the API settings are stored on the API layout, and that's where you'd enter your own AWS API key (see Figure 2). The other thing we need to provide is somewhere to enter the text you want Billy Bass to speak. That's the BILLY layout (see Figure 3).
Figure 2 – Provide your own AWS API key on the API layout
Figure 3 - Enter text for Billy Bass to speak
As you enter text, it's automatically tagged with the <speak> and </speak> XML-like tags. That's because we will take advantage of Polly's ability to process text that is marked up with SSML (Speech Synthesis Markup Language). You will see why later. Polly itself can work with just plain text as well; you have to indicate your choice in the JSON instructions that you send over to Polly. Besides the actual text that we want to synthesize, we also have to send some other information over to Polly. According to the API documentation, the call should look like this:

POST /v1/speech HTTP/1.1
Content-type: application/json

Body:
{
   "LanguageCode": "string",
   "LexiconNames": [ "string" ],
   "OutputFormat": "string",
   "SampleRate": "string",
   "SpeechMarkTypes": [ "string" ],
   "Text": "string",
   "TextType": "string",
   "VoiceId": "string"
}

That's easy enough to do: FileMaker is excellent at creating JSON, and the "Insert from URL" cURL options let us specify the POST method and set the Content-Type header. To make sure that Polly accepts the request we send, we have to include authentication information in the header of the call. That is probably the biggest challenge in making this REST call. The section quoted below comes from the Authentication and Access Control for Amazon Polly page and describes our options for authenticating.
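In the demo this body is assembled with FileMaker's JSON functions; purely as an illustration outside FileMaker, the minimal body we need could be built like this (a Python sketch; the voice and sample rate are example values, and only the fields we actually need from the API shape above are included):

```python
import json

def polly_request_body(text, voice_id="Joanna", sample_rate="22050"):
    """Build the minimal JSON body for Polly's /v1/speech endpoint.

    We wrap the text in <speak> tags and declare TextType "ssml",
    ask for an mp3 back, and pick a voice. The remaining fields in
    the API reference are optional.
    """
    return json.dumps({
        "OutputFormat": "mp3",
        "SampleRate": sample_rate,
        "Text": "<speak>%s</speak>" % text,
        "TextType": "ssml",
        "VoiceId": voice_id,
    })

body = polly_request_body("Hello, I am Billy Bass")
```

Sending "TextType": "text" instead would make Polly treat the input as plain text rather than SSML.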
The SDK and CLI tools use the access keys to cryptographically sign your request. If you don't use AWS tools, you must sign the request yourself. Amazon Polly supports Signature Version 4, a protocol for authenticating inbound API requests. For more information about authenticating requests, see Signature Version 4 Signing Process in the AWS General Reference. Since we are not using any of the AWS SDKs or CLIs, we need to use that "Signature Version 4" protocol, which comes down to sending a special Authorization header as part of the cURL options. As it happens, Salvatore Colangelo from Goya (famous for their BaseElements analysis tool and BaseElements plugin) demoed this at the 2017 DevCon and described the process in this blog post:
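For reference, the core of Signature Version 4 is a chain of HMAC-SHA256 operations: a signing key is derived from your secret key, the date, the region, and the service, and that key then signs a canonical "string to sign". A condensed Python sketch of that derivation (the construction of the canonical request and string to sign, which SigV4 also requires, is omitted here):

```python
import hashlib
import hmac

def _hmac(key, msg):
    # One HMAC-SHA256 step in the key-derivation chain.
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def sigv4_signature(secret_key, date_stamp, region, service, string_to_sign):
    """Derive the SigV4 signing key and sign a precomputed string-to-sign.

    date_stamp is YYYYMMDD; for Polly in this post the region would be
    us-east-1 and the service name "polly".
    """
    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    k_signing = _hmac(k_service, "aws4_request")
    # The final hex digest goes into the Authorization header.
    return hmac.new(k_signing, string_to_sign.encode("utf-8"),
                    hashlib.sha256).hexdigest()
```

The FileMaker script does the equivalent with native functions; the point is that every piece of the Authorization header is deterministic, so it can be computed anywhere that offers HMAC-SHA256.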
So, I gratefully borrowed his script and supplemented it with the rest of the data that Polly needs: what language to use, what voice for that language, whether we are sending SSML text or pure text, and whether we want an MP3 file back and at what bitrate. The language and voice are something we want to play with, so I made those choices available on the FileMaker layout (see Figure 4).
Figure 4 – Specify the language and voice
The other settings we leave pretty much at their defaults, because they produce a nice small MP3 file that the Raspberry Pi can download efficiently. To synthesize the text shown above, the request that our script produces is this:
The JSON body of the request is stored in the variable $request_params and contains just the bare minimum that we must send (see Figure 5).
Figure 5 - $request_params variable
We use that variable as part of the cURL options in the variable named $options, so that together the cURL options we feed to the "Insert from URL" script step look like Figure 6.
Figure 6 – $options variable
Figure 7 shows the script at the step where we call that "Insert from URL" script step and point it to the container field that will store Polly's mp3 audio file.
Figure 7 - Use the "Insert from URL" script step to point to the container field that will hold the audio file.
When we run the script, Polly sends us the audio file of our text:
Figure 8 - Audio file of the text
So far so good. That's #1 and #2 done. Time to start on #3: how to make Billy Bass move.
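The exact contents of the $options variable are in Figure 6, but FileMaker's "Insert from URL" accepts a subset of curl flags, so options for a call like this typically take a shape along these lines (an illustrative sketch only, not the figure's contents; $authorization_header is a hypothetical variable holding the computed Signature Version 4 header value):

```
"-X POST" &
" --header \"Content-Type: application/json\"" &
" --header \"Authorization: " & $authorization_header & "\"" &
" --data @$request_params"
```

The @$variableName form tells FileMaker to post the contents of a script variable as the request body.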
In my next post, I'll show how I made Billy Bass move. The post DevCon 2018 Follow-up: The Story of Billy Bass – Part One appeared first on Soliant Consulting.
View the original post

FileMaker DevCon 2018: Training Day – Part Two

Last week I wrote about my experience assisting with Bob Bowers’ Advanced session. I also interviewed two other trainers about how their sessions went. That was so much fun I decided to interview everyone who gave trainings this year. I managed to track down three more: Mike Beargie of MainSpring, Jeremy Brown of Geist Interactive, and Cris Ippolite of iSolutions. It was a pleasure to speak with them all. Here’s some of what they had to say. Intermediate - FileMaker Shared Hosting Master Class Many of you know Mike Beargie from his consistently helpful presence in the FileMaker Community. He was kind enough to meet with me at lunchtime, even though he had a DevCon session to give immediately afterwards. We spent a little time catching up on the past year, and then he shared his reflections on his half-day training: “This is the first time I’ve done a training day. I spend a lot of time answering people’s questions in the community, so it seemed like the natural next step. I’ve also been speaking at DevCon for a few years now, and I wanted to try something longer than a session. My class was all about FileMaker hosting, how to install FileMaker Server or FileMaker Cloud so that you can share out your files. It was a start-to-finish comprehensive course on how to install the server, how to secure the server – including generating and installing an SSL certificate – how to configure all the settings in the admin console, how to actually connect to the server once it’s up and running, and finally touching on troubleshooting and getting help. My goal was to show people that they can set up their own server. With FileMaker 14 support ending in September, and multi-tenant shared hosting going away, there are going to be a lot of people scrambling to re-host their solutions.
The landscape is changing with hosting companies: now they’re offering their IT knowledge as a service to help set up dedicated servers for people, rather than providing a setup where a group of clients save money by all sharing the same server. When we hit the first break, I spent the whole time answering people’s questions. People were really engaged and just stayed all the way through. They wanted to learn as much as they could. At the end of the training, a lady from the UK came up to talk to me. She was the embodiment of a citizen developer, a business owner who is trying to provide more efficient software for their staff. Up to now she hadn’t considered doing this herself. But her FileMaker 14 hosting company basically told her, ‘We can set up a dedicated FileMaker 17 server for you and help you manage it, but your rates are going to go up significantly’. So she was worried that she couldn’t afford it, and the license was costing her money and maybe she wouldn’t even be able to use it. Now she has a lot more confidence. She told me, ‘You know what, we’re setting up our server as soon as we get home. We just got our 17 licensing and we were really scared about doing the AWS part of this. You made it look really easy’ – and I jumped in and said, ‘It IS really easy’ – but the important thing is that now she’s ready to give it a try.” Advanced - JavaScript for FileMaker Developers I’ve known Jeremy Brown for several years now and admire his passion for teaching and willingness to help others. We met in the hallway between DevCon sessions for a quick chat about his half-day training. “My session was JavaScript for FileMaker developers. I wanted to communicate the simple fact that all of us FileMaker developers can learn JavaScript. I tried to give them a clear path in – to demystify it and to show that it’s not too time-consuming or impossible to learn. 
I covered basic JavaScript concepts in the first 90 minutes and then spent the rest of the session on FileMaker Web Viewer integrations. I was happy that there were 150 people in the session. Shows there’s lots of interest. Even some people from FileMaker, Inc. were there and wrote about it in their blog post. Now I have a Slack community for all the people who signed up during my session so they can continue that conversation in the weeks going forward. One of the participants was a first-time DevCon attendee who has been working on his solution for a long time and is interested in expanding its platform. He sat in the front row and was there the whole time, working hard. At the end of the session he shook my hand and told me that I helped inspire him to continue his study. He was excited to work with the charting library that I had provided and get it fully integrated into his system.” Intermediate – Relationships / Calculations I got to know Cris Ippolite during my time working as the Technical Marketing Evangelist for FileMaker, Inc. I will always be grateful for all the encouragement and support he gave me during that time. And of course he’s a joy to interview — the tricky part for me is to edit down our conversation while preserving his distinctive voice: “I gave a full-day training in two parts. The morning was about intermediate-level relationships and the afternoon was about calculations. Relationships is a topic that I’ve been investigating recently, figuring out why people struggle with it so much. The main thing people can’t seem to wrap their heads around is the relationship graph. The concept of relationships in the abstract makes sense to people, but the different ways you actually use the tool can be challenging. So I’ve been separating the idea of creating true relationships between tables from relationships we use for queries – which is where I see people getting lost.
The graph is great for true relationships, but I don’t see the upside of visualizing a query as a bunch of boxes with lines between them. Instead of burdening people with parsing out ‘Is this the same thing as that’, I say separate them so it’s easy to see the difference. People responded to that honest critique, and to learning a way to sort things out. I could tell it was landing with people – you know, when you get the nods and the ah-has as you’re going along. Then, that night, a group of folks I ran into in the bar – I’m assuming they travel together because they were in the class together too – they pulled me aside and all started talking at once, saying, ‘Hey, that was great! Thanks for letting us know that we weren’t the only ones confused by this.’ In the afternoon I talked about calculations. I wanted to impress upon people that it’s not all about calculated fields. Maybe you’re already comfortable creating field formulas like you do in Excel, but there’s so much more – you can use formulas all over the platform, in custom dialogs, replaces, hidden objects, conditional formatting, portal filtering, tooltips, all that stuff. So if you invest in increasing your calculation vocabulary, you can leverage that information in a lot of different ways. At the end of the day, I try not to introduce more boring stuff. You know, people are like, holding their heads in their hands and saying, ‘Make it stop!’ so I always wrap up with something fun. This time, I created a dog-walking app that uses four or five different GetSensor parameters to do things like counting your steps and how far you’ve gone. People really dug it, they were rushing to download it to their phone, and they had a great time playing around with it. What could be more fun than getting people on their feet and putting calculations literally in action?” I had a great time talking to all these folks, hearing how they work and what motivates them as trainers. I hope you got something out of it too! 
FileMaker DevCon 2018: Training Day - Part One FileMaker DevCon 2018: Day 1 FileMaker DevCon 2018: Day 2 FileMaker DevCon 2018: Day 3 The post FileMaker DevCon 2018: Training Day – Part Two appeared first on Soliant Consulting.
View the original post

FileMaker DevCon 2018: Day 3

Today was the final day of DevCon, and it had an entirely different flavor: Customer Day! Rather than technical topics, the majority of today’s sessions were show-and-tell client stories of problems solved using FileMaker. Success Stories of Soliant's Philanthropy Committee The first session I attended began with a presentation by our own Makah Encarnacao. Makah shared her own story of a chance conversation with Chris Manto during DevCon a few years ago. Over a drink at the bar he shared his experiences in West Africa, working as a film journalist in his early twenties, when he witnessed horrific starvation and death, while all he and his team could do was use their vocation as journalists to tell the story. This serendipitous meeting inspired Makah to do something for others. With the support and encouragement of Bob Bowers, our CEO, Soliant's Philanthropy Committee was born. "FileMaker in Action: Non-Profit Case Studies" session presented by Makah Encarnacao Our team of developers, business analysts, and project managers participates on a voluntary basis to provide services to non-profit organizations that otherwise could not afford custom software. The work is done in addition to their normal workload, often after hours, and spans all our practice areas: FileMaker, Web, and Salesforce. Together with Josie Graham, Makah vets the submissions and pairs them with the perfect volunteer. Makah summarized the story of each organization: what they do, what they needed, how we helped, and the difference it made to their operation.
- The Luke Commission
- Operation Comfort
- Rainforest Trust
- Prince Albert Food Bank
- St. Francis Center
- Altadena Mountain Rescue Team
- Researching Hope
- Women's Alliance for Knowledge Exchange
These institutions do wonderful work, and now they are more effective, more efficient, and more nimble with the help of a little expertise. I've never been prouder to be a part of the Soliant team.
Visionary Bar I spent my refreshment break answering questions at the Visionary Bar. The idea is for DevCon attendees to just walk up and get answers and advice from FileMaker Business Alliance (FBA) members. I've done this for many years and always find it quite rewarding to help others. Helping at the Visionary Bar FileMaker in Action: Media and Arts Case Studies For the second session of the morning, I saw fantastic examples of FileMaker used by two art and media businesses. The first was Bryn Behrenshausen of Kalisher, a design house that creates and curates comprehensive art collections and has remote teams in six US cities. They have tons of information to keep track of: customers, designers, purchasers, owners, specs, pricing, and 100+ employees. At the time Bryn joined Kalisher in 2014, Soliant Consulting had already built them a solution for managing their projects, quotes, vendors, and suppliers. Bryn shared his story of learning FileMaker development; he has now built several modules of his own as well as refreshing the original solution. He then shared some great tips for newer developers, such as "Be consistent with schema naming and script structure." It was great to hear Bryn's story. Matt Greger presented the second portion of the session. He shared how FileMaker was used to manage TV spot traffic for 30 years for "As Seen on TV", the company that brings you those great infomercials for Flex Seal, Snuggie, Copper Fit, and so many other products. It was a fantastic example of something I've seen many times: FileMaker can be a fantastic hub for data. In Matt's solution, FileMaker pulls in data from a variety of sources including Google AdWords, Amazon, Facebook, YouTube, Microsoft Bing Ads, and TV vendors. The data is processed and aggregated and then accessed by analytics tools such as Tableau, Power BI, and TIBCO Jaspersoft. Lunch Networking During lunch, I met with Mike Zarin. Mike attended my Wednesday presentation "Tackling Sync."
He approached me following the session and asked if we could chat about the sync requirements for his project. It was great to hear about Mike's project and make suggestions for easier ways to solve his data posting needs. As the meal progressed, others joined our table. We spent the remainder of the time sharing our stories about how we got started using FileMaker. Several of us had been developing in FileMaker for more than 20 years while others were relatively new. Networking lunch: (front row) Mike Zarin, Dawn Heady, Lee Lukehart, Matthew Dahlquist, (back row) Jenna Lee, Jowy Romano, and Stephen Kerkvliet. Jim Medema (not pictured) graciously took the photo. From One to Many: Growing Your Consulting Firm Following lunch, I switched tracks and attended one of the FBA Day sessions. David Knight, the president of Angel City Data, presented a great session on growing your business. I especially liked David's message to "Learn to let go!" When you don't let go, you're not steering the boat; you're rowing it. Closing Session & Awards Presentation I was thrilled that the Women of FileMaker were awarded a FileMaker Excellence Award for their work in the community! This year, the Women of FileMaker provided scholarships so five first-timers could attend DevCon. They've created a mentoring program and organized a Buddy program to pair DevCon first-timers with a seasoned pro to help attendees find their way around the conference. I can recall attending my first Women of FileMaker luncheon many years ago. There were maybe twenty of us eating at the hotel's restaurant. This year there were several hundred DevCon attendees at the luncheon. The growth of this group has been tremendous! This year Soliant Consulting was an honorable mention for the Excellence Award for education, an award we won in 2017.
During the closing session, the location for next year's DevCon was announced: FileMaker Developer Conference 2019, August 5–9, 2019, at the Gaylord Palms Resort in Kissimmee, Florida. This year was my 19th consecutive DevCon, and each one is exciting, tiring, and inspiring. I return with new ideas, determined to learn even more. I love seeing my friends from around the world and my Soliant family from around the country. Hope to see you there! FileMaker DevCon 2018: Training Day - Part One FileMaker DevCon 2018: Day 1 FileMaker DevCon 2018: Day 2 The post FileMaker DevCon 2018: Day 3 appeared first on Soliant Consulting.
View the original post

FileMaker DevCon 2018: Day 2

After a busy first day of FileMaker DevCon 2018, day 2 continued the theme of FileMaker as a Workplace Innovation Platform. FileMaker + Tableau, a Match Made in Data Heaven! The morning sessions included an eye-opening Tableau integration session from Vincenzo Menanno. In his session, Vincenzo demonstrated how one could use Tableau charting and graphing tools inside a FileMaker WebViewer and then use Tableau's URL Actions to call specific FileMaker scripts within your solution, providing a seamless integration between FileMaker ("The Data Curation Tool") and Tableau ("The Data Slicing Tool"). Under the Hood: FileMaker WebDirect Scalability “Under the Hood: FileMaker WebDirect Scalability” session presented by Emmanuel Thangaraj The late morning sessions continued with Emmanuel Thangaraj’s session. This session was great for learning the inner workings of FileMaker Server 17’s Multiple Web Publishing Engine (MWPE) and FileMaker Load Balancer (FLB), which increase the number of users that WebDirect can support and enhance server stability at the same time. I find I always come away from FileMaker’s “Under the Hood” sessions with something new and tangible that I can apply to my development projects. Data Cleansing for Data Managers and Consultants “Data Cleansing for Data Managers and Consultants” session presented by Molly Connolly Following a delectable lunch, Molly Connolly gave an insightful session on using FileMaker to scrub bad data from disparate data sources. Using FileMaker’s calculation and scripting capabilities, Molly walked users through how to cleanse text formatting in specific fields and from spreadsheet data. This session was excellent for beginner and intermediate developers, and Molly organized her presentation in a linear way that built upon each technique that she has used over her many years of experience.
Under the Hood: Data Migration My second "Under the Hood" session of the day (did I mention I love "Under the Hood" sessions?!?) was with Clay Maeckel on FileMaker's new Data Migration Tool. Earlier this year I wrote a blog post on this tool. He went into detail about the internal implementation of the tool, provided clarity regarding the rules of schema matching between your source and clone files, and explained how the tool can be so fast at migrating your data to a new production file. Clay is one of the original authors of FileMaker's Draco engine (he started working for FileMaker the year I was born!), and his experience shone through in this session. Tackling Sync Later in the afternoon, our very own Dawn Heady presented her session, "Tackling Sync." Dawn started by focusing on five specific strategies for designing your sync solution, such as minimizing historical data, pre-populating the mobile app data, and pushing actions to the server side when possible. She then discussed three scripting methodologies for completing a sync: import script steps, transactional scripts, or web services. Dawn then showed how to use an external data source on the server using a global variable. What a creative solution to this challenge! Next, she demonstrated a working transactional sync solution that will be included with the session materials. From there, Dawn went into well-known FileMaker sync solutions and discussed their setup process, along with the benefits and drawbacks of each. "Tackling Sync" session presented by Dawn Heady Attendee Dinner Party After our Wednesday sessions, we went to the Attendee Dinner Party and had a wild Texas time! A live band with line dancing lessons, billiards, darts, and ping pong were some of the highlights from this event. Overall, this has been one of my favorite DevCons yet. The variety and polish of the sessions have been impressive and inspiring.
The food has been consistently delicious, with the bacon being truly remarkable, and I'm a man who knows good bacon. I'm looking forward to what the final day of DevCon will bring to inspire us to create innovative workplace solutions. FileMaker DevCon 2018: Training Day - Part One FileMaker DevCon 2018: Day 1 The post FileMaker DevCon 2018: Day 2 appeared first on Soliant Consulting.
View the original post

FileMaker DevCon 2018: Day 1

The FileMaker Developer Conference has become, for me, a bit of a reunion. As a remote employee, FileMaker DevCon is a chance to hang out with my co-workers… and it feels like the FileMaker community at large is just an extension of that group. The sessions are good, too. Workplace Innovation Platform The morning kicked off with a Special Session, where Andy LeCates introduced FileMaker as a Workplace Innovation Platform. This framing hits the nail squarely on the head. They’ve summed up what we’ve always known—what we love to do—turning a complex conversation into a succinct story (and a sweet little video): I particularly enjoyed guest speaker Richard Cox Braden, who spoke about the difference between Creativity and Innovation. As a creatively challenged person, it was helpful to see that amorphous blob broken into distinct, progressive elements: Imagination -> Creativity -> Innovation -> Entrepreneurship. (Maybe I CAN do some of that…) Modular Programming with Card Windows My next stop was John Renfrew’s session on card windows. He had great advice on using them for modular, transactional user interaction. He has taken card windows beyond the default centered-highlight use, manipulating their sizing and placement to great effect. Women of FileMaker Luncheon One of the highlights of the day, as it is every year, was the Women of FileMaker luncheon. Our developer population is growing every year, and I love this chance to connect with and support each other. I left the luncheon with a new friend AND a pinkie promise to help each other apply to be speakers next year. Win-win! Afternoon Sessions I attended three fantastic sessions in the afternoon: Professional Development for All Ranges of Experience Molly Connolly helped me think through professional development goals (see the pinkie promise above). I particularly appreciate the encouragement to incorporate personal goals into my development plan. 
I WILL learn the second half of Für Elise some day… Flexible Reporting with Virtual Lists and ExecuteSQL Martha Zink rocked the virtual lists lesson. Now I’m FINALLY ready to use them all the time myself. Delight Driven Design - Transforming Designs from Good to Great Jordan Watson reminded me of good design principles. As ever, I do best with tangible rules to follow, and his clear Do This/Don’t Do That series was helpful. End of Day Fun The day wrapped up with a solid six hours of socializing. Some of us went to a baseball game, some went kayaking, but I chose to stay in the air conditioning and talk talk talk. (Texas is HOT, yo.) This rate of chatting may not be sustainable, but it's definitely my favorite part of DevCon. There were so many good lessons from today, but the one I think I’ll apply daily: Every session – and workday – should end with Chuck Brown & The Soul Searchers' “I Feel Like Bustin’ Loose.” FileMaker DevCon 2018: Training Day - Part One The post FileMaker DevCon 2018: Day 1 appeared first on Soliant Consulting.
View the original post

FileMaker DevCon 2018: Training Day – Part One

Although I’ve been going to DevCon for Lo These Many Years, and even had a role in shaping how Training Day has evolved, I’ve never had the chance to assist with a Training Day session, or for that matter even attend one. This year I had the opportunity to join my Soliant team-members assisting Bob Bowers with his advanced session, and I’d like to share a few observations from that. First I want to say that I wish I could have attended all the Training Day sessions, especially the one on User Experience, but sadly I’ve never learned to be in more than one place at a given time. Instead, I’ve accosted a couple of the presenters in the hallway to ask them how things went. I hope to speak with the rest of the presenters before the week is over and write another post sharing what I hear from them. Advanced 1 - Techniques & Advanced - Integration I’ll start with my report on the advanced session: Bob does a great job of taking complex topics and reducing them to their essentials, providing a foundation for exploring them in greater depth. Among other things, he guided people through:
- JavaScript-driven data tables
- the ExecuteSQL function
- setting up ODBC connectivity, both with FileMaker pulling data from another data source and acting as a data source itself
- the structure of JSON data objects and how to use FileMaker Pro’s JSON functions
- the basics of cURL commands and how to incorporate them into FileMaker Pro’s Insert From URL script step
- connecting to APIs using the above and parsing the resulting JSON
It was a lot of material, but as promised, he stripped it down to concepts that were easy to understand and put into practice. That said, when we came back from lunch, he announced that his strategy for staying on track would be to start talking faster. And so he did. I love helping people understand new concepts, so it was a treat to work as an assistant. My only disappointment was that generally, Bob made things so clear that people didn’t need me much.
I learned a few things along the way myself, including an approach to looping through grouped data that involves looking at the first record in a sorted (grouped) record set, working with that record, then calculating the number of records that belong to the group and jumping past them to the next unique record. It’s simple, but I’ve always accomplished the same thing in a different way and was happy to be offered an alternative. Beginner 1 Next, here’s what Jim Medema told me about his beginner session. “My goal was to create a training such that brand-new people – we had people who had been introduced to FileMaker one or two weeks ago – could, by the end of the day, have built an app they could walk away and put into production. And it happened! We were inches away from the point where they could actually sell it. We had a woman come up really happy with the team of assistants – we had great people, Lee Lukehart, Matthew Dahlquist, and Bill Nienaber – and she said, “Any time anybody raised their hand, they were attended to within moments. Whatever problems they ran into, they got solved.” One of the assistants told me afterward, ‘You were pushing the class pretty hard. I was working on a technical problem with somebody for a while. When I was done, I wondered how the group was doing. When I looked up, there were people building charts, charging ahead; they were all with you.’ We also had some experienced engineers. One guy said, 'I don’t know if I belong here, I might be kind of bored so don’t be offended if I walk out.' But he stayed all day and told me at the end, ‘You have laid the complete groundwork for everything that I need to know to get started.’ He’d inherited a legacy system built in FileMaker 6 that finally needed to be rebuilt after running for 18 years. 18 years! Can you imagine?
And he feels ready to go off and do that now.” I’d like to congratulate Jim for his skilled work as a trainer and his commitment to helping new users find immediate success on the platform. User Experience 1 - Research, Mapping, and Validation Today I also talked to Matt O’Dell about his User Experience session. Matt was my team lead at FileMaker when I started in the Marketing department, and we’ve become good friends. A couple of years back, we ran off to Denver together to attend a design thinking training put on by Adaptive Path. He’s continued to charge ahead learning more about design and is committed to making it the focus of his work. I had a wonderful time helping out Bob and working with my Soliant colleagues, but my second choice would have been to spend the day with Matt. He has so much passion for user experience design. Here’s what he said to me today: “The full-day training involved following the design process to solve an actual problem. We pretended that we’d been hired by FileMaker to improve the process of purchasing a DevCon registration. We started with research and followed all the way through to creating a prototype at the end. We had people build paper prototypes, which was a new experience for most people in the group. After they built them, we said, ‘OK, now take that prototype and go out and find someone, just randomly find a DevCon attendee and test your prototype with them.’ We taught them the basics of usability testing first and then just sent them out. People were asking, ‘Is this going to work? Are people going to trust us? Will they interact with us?’ -- but you know DevCon people, they’re a helpful bunch. When the trainees came back they said, ‘It was surprising! You just put the paper down in front of them, you tell them what you want them to try, and they just start tapping with their finger.
Then you throw the next piece of paper down, and they pretend to type; it was crazy how well that works.’ You got all that feedback after building a prototype in only 30 minutes. The idea was not to prototype in FileMaker or any other software – not to get too invested in a given design – but to make it easy to throw away and try something else. Some people even managed to test more than once. They identified problems with their prototype, drew up new screens, and went out and found someone else to test with again. That’s how you do it! That was the a-ha moment for people. This isn’t just a fun little art project -- it actually works.” Hearing about all this from Matt, I especially liked how he got people on their feet and moving. They never touched their computers, so there was no opportunity to zone out and check their email. They stayed engaged every minute. He had some great assistants too – Alexis Allen, a brilliant design-focused developer; Steve Gleason, who has an advertising background; Karen Craig, who has an industrial design background; and Laramie Erickson, a project manager at iSolutions. I’m sure they made an amazing team. I hope I get the chance to help out in the future myself. Unfortunately, it sounds like the workshop wasn’t as well attended as Matt had hoped. That disappoints me, since I strongly believe that a design-centered process really works. Right now I'm working with a pro-bono client through Soliant’s wonderful Philanthropy Program, and I’m incorporating design activities into our work together during the foundation phase. In the first few meetings, I was getting disjointed requirements that I couldn’t assemble into a clear narrative. But when we switched to a design-centered approach, everything immediately started coming into focus. We also started having a lot more fun. So that’s my report so far.
Please stay tuned for another post once I’ve had the chance to talk with the other three trainers: Jeremy Brown, Matt Petrowsky, and Cris Ippolite. Happy DevCon! The post FileMaker DevCon: Training Day – Part One appeared first on Soliant Consulting.
Voir le billet d'origine

Now You See Them, Now You Don’t: Sidebar Panes in Layout Mode

Over the past few years, FileMaker has started incorporating collapsible sidebar panes into the design of FileMaker Pro. They started with modernizing the Script Workspace, then added a right-hand pane in the Specify Calculation dialog box, and now the canvas in Layout Mode also follows the same design pattern (see Figure 1): A new left-hand pane contains two tabs labeled "Field" and "Object". The Field tab contains the Field Picker, previously a floating palette, and the Object tab replaces the floating Layout Object window that was introduced in FileMaker 16. A new right-hand pane contains the Inspector. If you've ever lost track of where the inspector is, or whether it's open, this should be a welcome change. Figure 1. New sidebar panes in Layout Mode (click image to enlarge) Familiar keyboard shortcuts apply to both panes: If you press command-K (control-K on Windows), the left-hand pane opens and closes. This was previously associated with opening and closing the Field Picker palette. If you press command-I (control-I on Windows), the right-hand pane opens and closes. This was previously associated with showing and hiding the Inspector palette. Since the new panes are part of each window you have open in Layout Mode, they are controlled independently for each one. I like how these changes bring more consistency to the FileMaker Pro user experience and anchor key information in predictable locations. What else has changed? Figure 2. Updated "picker" and Field tab (click image to enlarge) Generally, the contents of each of these panes are the same as in their FileMaker 16 equivalents, with a few notable differences (see Figure 2): The new "picker" includes icons that make it easier to recognize each field type on sight. (As in FileMaker Pro 16, you can change the field type from the picker rather than going to Manage Databases). You can now set field control styles directly from the Field tab, where previously you could only do this using the Inspector. 
But there’s one other significant change: in the upper left of the screen there is no longer a "book" to page through layouts, or a slider to move through them quickly. I assume that this change was made in the spirit of simplifying the interface and to help prevent confusion between Layout Mode and Browse Mode, which until now used the same interface elements in similar ways. As an advocate for new users, I’m very much in support of making the new user experience more intuitive, but I have to say that I’m going to miss these elements. For me, it's second nature to navigate from one layout to another using the book, or to jump to the first or last layout in the file using the slider. Fortunately, you can still use the same keyboard shortcuts for moving between layouts one at a time: ^↑ (control-up arrow) to move backwards and ^↓ (control-down arrow) to move forwards. If you have trouble with these on Mac, check your Mission Control settings in System Preferences. How does it feel? I'm still getting used to it. For example, here’s a window behavior that caught me off-guard: if your window is positioned in the middle of your screen with ample room to the right and left of the window, then switching to layout mode will expand the window on both sides to accommodate the two docked panes. That’s all well and good. But if for instance your window is positioned all the way to the left of your screen, switching to layout mode will move the content area of your layout to the right. I find this a little disorienting, but it may be something I'll adjust to over time. Additionally, when working on some legacy systems with wide layouts, I feel a little cramped if I have both panes open at once. That said, a well-designed layout shouldn’t get excessively wide, where “excessively” is a subjective term but has to do with how much information the user can scan easily. 
Most layouts should fit just fine on a modern monitor – even in layout mode showing both panes – while leaving room to work in the “scratch” or non-display area as well. However, if you ever find yourself limited in horizontal screen space, or if you just want to position the inspector close to the objects you are working with, do not despair. You can still work with multiple Inspectors, each of which opens as a familiar floating palette. Simply open a second Inspector by choosing the menu item View > Inspectors > New Inspector, and then close the right-hand pane. Note that there isn't a similar option for opening the Fields tab or Objects tab as a floating window. I’m curious what working habits I’ll develop over time: when the right-panel Inspector will feel solid and reliable, and how often I’ll finally need a floating one. I can tell that opening and closing the left-hand panel as needed will soon become second nature. What do you imagine your preferences will be? If you have any questions about this or any other new features included in FileMaker 17, please contact our team. We’re happy to help your team determine the best way to leverage them within your FileMaker solution. The post Now You See Them, Now You Don’t: Sidebar Panes in Layout Mode appeared first on Soliant Consulting.

How to Enhance Your FileMaker Solution with Microservices

There are many ways to boost your FileMaker solution's capabilities by going outside the scope of typical platform functionality. For example, you can adopt one of the many plugins available on the market, or you can partner with an experienced developer to customize functionality from the ground up and integrate with the tools and APIs provided by other software. A good example is emailing. For this, the FileMaker platform has native capabilities. You can leverage plugins to get additional features or integrate with any of the Outlook APIs. However, you have an often overlooked third option – microservices. Leveraging Microservices in FileMaker Microservices are aptly named – they’re pieces of functionality that perform small tasks. The term refers to a software architecture style of connecting small features together to create one larger cohesive system. Leveraging this type of development makes sense as your business evolves and your solution requires new functionality for more use cases, or needs functionality shared among different systems built on different platforms.
Microservices also prevent a single addition or bug from crashing an entire system by limiting its impact to one small part of it. This simplifies the deployment and security of new features. Microservices are tiny web services. The ‘micro’ part refers to the number of lines of code in the functionality. You can write microservices in any language that supports web services, including PHP, Ruby, .NET, Python, Java, JavaScript, and more. They promote agile systems, as their structure is lightweight, easy to test, and simple to build onto existing systems. In fact, they’re often recyclable, so you can reuse them and share them across other applications and platforms. Microservices vs. Plugins in FileMaker Microservices also present distinct advantages over plugins in FileMaker:
- Your choice of code: You can create microservices with a wide variety of coding platforms. You can only create plugins using the C programming language.
- Available on all FileMaker platform clients: You cannot use plugins in FileMaker Go unless you make special provisions. Similarly, plugins require a special version to work on FileMaker Cloud. However, you can use microservices with any type of FileMaker client.
- No platform dependencies: You must compile plugins for Mac, Windows, and Linux to cover the whole platform. Microservices work out of the box and are agnostic to the client’s platform.
Limitations of FileMaker Just like plugins, you can use microservices to add functionality to your solution that the FileMaker platform does not offer itself. For example, FileMaker does not provide support for Regular Expressions (RegEx), which work well for finding patterns in text. Say that you have a bunch of text from an email and you need to check if it contains a US or Canadian postal code and extract that from it. A RegEx expression of ^((\d{5}-\d{4})|(\d{5})|([A-Z]\d[A-Z]\s\d[A-Z]\d))$ would find instances such as “60607” and “60607-1710” for Chicago, IL or “L9T 8J5” for Milton, ON.
While FileMaker cannot do this natively, you can build such a web service in just a few lines on many other platforms, including .NET, Java, JavaScript, PHP, Ruby, and Python. In FileMaker, you would use the “Insert From URL” script step to call the microservice and then pass it the text and the expression you’d like to use on it. The microservice would send back the list of matches in JSON format to easily parse with the native JSON functions in FileMaker. New Ease of Adopting Microservices Leveraging microservices within FileMaker has been possible for years but has become much easier. Adopting microservices is easy for two big reasons: Since FileMaker 16, calling a web service and working with its response is extremely easy. The revamped “Insert From URL” script step and its support for cURL options take care of that. Every FileMaker Server already includes a web server (IIS on Windows, Apache on macOS). In addition, every FileMaker Server comes with a Node.js server already deployed, ready for you to use. You already have the platform to deploy the microservice on. Examples of Microservices in FileMaker My team and I have built a dozen microservices for clients’ FileMaker solutions over the years. For example, we’ve developed forecasting capabilities and specialized data reporting to fit within a legacy FileMaker solution for a biotechnology research organization. Our team has also worked with a national media company to build a connection between its FileMaker solution and Okta identity management capabilities for secure user login. Other examples include API-to-API mapping when receiving data from SAP systems into FileMaker and pushing data from FileMaker into financial systems. The possibilities are endless and truly depend on your needs and goals within your FileMaker solution. If your FileMaker solution needs functionality related to difficult or specialized computations, XML and JSON parsing, or API-to-API mapping, I recommend considering building microservices for your system.
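To make the postal-code example concrete, here is a minimal sketch of the core of such a microservice in JavaScript (a language the Node.js server bundled with FileMaker Server can run). The function names are illustrative, and the anchors are dropped from the expression above so it can scan a block of text rather than validate a single value:

```javascript
// Core of a hypothetical RegEx microservice; names are illustrative.
// The anchored expression in the article validates one value; to extract
// matches from free text we use the same alternation without ^...$.
// The ZIP+4 alternative comes first so "60607-1710" isn't cut to "60607".
const POSTAL_CODE = /\d{5}-\d{4}|\d{5}|[A-Z]\d[A-Z]\s\d[A-Z]\d/g;

function findPostalCodes(text) {
  return text.match(POSTAL_CODE) || [];
}

// The service would return the matches as JSON, which FileMaker can then
// parse with its native JSON functions after Insert From URL.
function buildResponse(text) {
  return JSON.stringify({ matches: findPostalCodes(text) });
}
```

The FileMaker side stays a single Insert From URL call; all the pattern-matching logic lives in the service.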
Building Your Microservices If you have questions or would like to add microservices to your FileMaker solution with an experienced partner, contact our team. Our experience in microservices extends well beyond FileMaker, and we’re happy to provide additional insights for your organization and evolving solution. The post How to Enhance Your FileMaker Solution with Microservices appeared first on Soliant Consulting.

Women of FileMaker Launches DevCon Buddy

Women of FileMaker is excited to launch DevCon Buddy, a new program empowering experienced DevCon goers to assist new attendees in maximizing their DevCon experience. The organization welcomes all experienced attendees to sign up on the Women of FileMaker site. All genders welcome! Announcing DevCon Buddy What:  New Attendee Meetup at DevCon 2018 When: Monday, August 6, 2018 at 5:00 pm Where: Grapevine B To help DevCon first-time attendees get the most out of the annual FileMaker conference, the Women of FileMaker have launched a new program, intended to assist in the pairing of new attendees with experienced conference attendees. It all starts with the New Attendee Meetup on the first night of DevCon. What to Expect at DevCon Buddy The organization encourages experienced DevCon attendees to join this meetup as a “buddy” who can guide the new attendees throughout the conference. Women of FileMaker will provide a special name tag for the new attendees to identify and connect with the Buddies. They encourage new attendees who are interested in having a buddy to approach anyone with a name tag and vice versa. Once attendees make a connection, they can exchange information and keep in contact during the conference. DevCon Newcomers Women of FileMaker encourages you to attend the New Attendee Meetup on Monday, August 6, at 5 pm in Grapevine B. Look for people with the Buddy name tag, and approach someone you think looks interesting! Then exchange information to keep in touch over the conference. Perhaps you can have lunch together or meet during a break to catch up and discuss upcoming sessions. DevCon Veterans If you would like to help out a new attendee, you can sign up as a buddy here. You will receive email updates and reminders to attend the New Attendee Meetup and instructions on how to get your name tag. Women of FileMaker appreciate your efforts to volunteer your time and make someone else’s DevCon experience more enjoyable! How It All Started
The idea for a DevCon buddy system started on the Dream Board at the Women of FileMaker booth at DevCon 2017. A booth attendee took the time to make this suggestion, and the organization was happy to make it a reality. Dreams do come true  The post Women of FileMaker Launches DevCon Buddy appeared first on Soliant Consulting.

Extending FileMaker PDF Functionality with DynaPDF Demo

Did you know you can do more with PDFs in your FileMaker solution with DynaPDF? Download our complimentary resource to learn about the capabilities of this product. This file contains a simple table containing pictures and images of different types of fruit. It helps you create a listing of each fruit with clickable links to different pages throughout the PDF, demonstrating the capabilities of DynaPDF within your FileMaker solution. Read the blog post…
Complete the form to receive the demo: Trouble with this form? Click here. The post Extending FileMaker PDF Functionality with DynaPDF Demo appeared first on Soliant Consulting.

How to Use FileMaker 17 to Connect the Data API and Your WordPress Website

Connecting Your WordPress Website and FileMaker Solution In this post, we demonstrate how you can submit a form built with a freely available WordPress plugin, Ninja Forms, directly to your FileMaker solution. This makes building web forms easy while staying compliant with a WordPress theme. This process also supports the responsiveness of your WordPress deployment. You do not need access to the backend MySQL database, which is not always easy or available. This makes it very portable and convenient to add as a WordPress administrator. You only need to add a custom action to the Theme Functions ("functions.php") file in WordPress and specify that custom action in your Ninja Forms configuration. Pre-FileMaker 17 Strategy: Custom Web Publishing Previously, in FileMaker 16 and prior, you could send data to a CWP (custom web publishing) script that could include the FM API for PHP and use that to insert into your database. This required a web server running PHP, possibly on the same machine where FileMaker Server was running. Simplified Process with the FileMaker Data API That changes in FileMaker 17. Now, you can communicate directly with the newly available Data API to insert a new record in your database with no other pieces needed or external APIs to load. Both methods use a WordPress class, WP_Http, to make the external requests and submit all form data entered by end users. In WordPress, go to Appearance->Editor, then select the Theme Functions file and insert the following code, with adjustments made to the variables at the top of the function.

add_action( 'fm_WordPress_action', 'fm_data_api_form' );

function fm_data_api_form( $form_data ) {
    // set variables
    $myServer = 'https://your_server_name/'; // the url of your server
    $myUser   = 'Username';                  // your username
    $myPass   = 'password';                  // your password
    $myFile   = 'Your_File';                 // your file name
    $myLayout = 'Your_Layout';               // your layout name
    $myJSON   = '{}';                        // empty JSON body for the login request

    // authenticate and get token
    $myEndpoint = 'fmi/data/v1/databases/' . $myFile . '/sessions';
    $post_url   = $myServer . $myEndpoint;
    $headers    = array(
        'Authorization' => 'Basic ' . base64_encode( "$myUser:$myPass" ),
        'Content-Type'  => 'application/json'
    );
    $request  = new WP_Http();
    $response = $request->request( $post_url, array( 'method' => 'POST', 'body' => $myJSON, 'headers' => $headers ) );
    if ( $response ) {
        // inspect response
        $responseObj  = json_decode( $response['body'], false );
        $responseCode = $responseObj->messages[0]->code;
        if ( $responseCode == '0' ) {
            $responseToken = $responseObj->response->token;

            // format json from form data
            $form_array = array( 'fieldData' => array() );
            foreach ( $form_data['fields'] as $field ) {
                // get key/value pairs into an array by themselves
                if ( $field['key'] != 'submit_button' ) {
                    $form_array['fieldData'][ $field['key'] ] = $field['value'];
                }
            }
            $myJSON_Data = json_encode( $form_array );

            // insert record
            $myEndpoint = 'fmi/data/v1/databases/' . $myFile . '/layouts/' . $myLayout . '/records';
            $post_url   = $myServer . $myEndpoint;
            $headers    = array( 'Authorization' => 'Bearer ' . $responseToken, 'Content-Type' => 'application/json' );
            $request    = new WP_Http();
            $response   = $request->post( $post_url, array( 'method' => 'POST', 'body' => $myJSON_Data, 'headers' => $headers ) );
            if ( $response ) {
                // ok
            } else {
                // insert err
            }

            // log out
            $myEndpoint = 'fmi/data/v1/databases/' . $myFile . '/sessions/' . $responseToken;
            $post_url   = $myServer . $myEndpoint;
            $headers    = array( 'Content-Length' => '0' );
            $request    = new WP_Http();
            $response   = $request->post( $post_url, array( 'method' => 'DELETE', 'headers' => $headers ) );
        } else {
            // authentication err
        }
    }
}

Enabling Ninja Forms To use this with Ninja Forms, we open the "Emails & Actions" configuration (see Figure 1) for the form we are working on, enable the "Web Hook" (see Figure 2), and give it a Name and Hook Tag. The Hook Tag here is important: it needs to match the action name given in the custom action. In the code above, it is the first parameter of the add_action function, where we name it "fm_WordPress_action".
Figure 1- Use "Emails & Actions" to configure your form Figure 2- Use the slider to enable the "Web Hook" action. If we break down the above code, we see that it performs two steps. The first is to authenticate to the database and get a token back. Once we have successfully authenticated and have a token, we use that in the header of our HTTP request to submit the JSON to the Data API that completes our record creation.
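Those two steps (plus the final logout) boil down to building the right endpoint URLs and headers. Here is that construction isolated as a small JavaScript sketch; the server, file, and layout names are placeholders, and the paths follow the same Data API v1 pattern used in the PHP code:

```javascript
// Endpoint and header construction for the Data API flow: log in with
// Basic auth to obtain a token, create the record with a Bearer header,
// then DELETE the session. All names here are placeholders.
function dataApiRequests(server, file, layout, user, pass, token) {
  const base = `${server}/fmi/data/v1/databases/${file}`;
  return {
    login: { // POST; Basic auth is only used to obtain the session token
      url: `${base}/sessions`,
      headers: {
        Authorization: 'Basic ' + Buffer.from(`${user}:${pass}`).toString('base64'),
        'Content-Type': 'application/json',
      },
    },
    createRecord: { // POST; the body is the {"fieldData": {...}} JSON
      url: `${base}/layouts/${layout}/records`,
      headers: {
        Authorization: `Bearer ${token}`,
        'Content-Type': 'application/json',
      },
    },
    logout: { url: `${base}/sessions/${token}` }, // DELETE ends the session
  };
}
```

Note that the credentials appear only in the login request; every later call carries the session token instead.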
There are RESTful endpoints used to perform both authentication and record creation. You can find these in the Data API documentation. First, we send credentials using an HTTP header and receive a token upon successfully authenticating to the database. Once we have a token, we do a bit of cleanup work to get our data out of the form, where we reference the "Field Key" from the form builder, and correctly format the JSON we will submit as the body of our request to create a new record. We define the Field Key for each field in the form (see Figure 3). Figure 3 - Define the Field Key for each field. Of course, all field keys need to match the field names in your database. Also, this assumes that you have correctly set up security in your FileMaker file and have enabled the "fmrest" extended privilege for the account used to perform this action (see Figure 4). Figure 4 - Set up security in your FileMaker file to enable the "fmrest" extended privilege. You will see the File and Layout specified in the endpoint we reference, so we then only need to include the JSON containing our record data, in addition to passing our authentication token as a header. All this is simple enough once you have achieved a cursory understanding of REST architecture and how to work with it. Licensing and Costs FileMaker Server includes an amount of data transfer at no additional cost. The good news is that the amount already provided -- 2 GB of data per user each month -- should be more than enough for almost every use case. That does not include container data and only counts toward outbound data transfer. The allotment is also pooled across all users. For example, if you have a FileMaker Server with five licensed users, that would be 10 GB/month, or 120 GB for the year. That far exceeds the amount that most solutions see in data transfer even with FileMaker Pro pushing user interface and containers.
If you go beyond this very high threshold, you would likely want to pay additionally for the support to handle that kind of traffic. Considering the cost of FileMaker Server effectively includes this licensing, I would not hesitate to recommend using the Data API in any new development. Building on Your FileMaker-WordPress Integration This is just an example showing a basic integration of a WordPress form used to create a record in your database. You could build on this example to add more robust error handling or additional functionality. Integrating with FileMaker is easier than ever! Hopefully this article gives you some insight into how you can extend your FileMaker applications in ways previously not conceived. If you have any questions or need additional help integrating your FileMaker solution with your website, please contact my team. We’d love to see if we’re a good fit for your next project.
References FileMaker Data API Guide FileMaker Licensing FAQ Ninja Forms Your Guide to Using the FileMaker Data API to Enhance Your Application The post How to Use FileMaker 17 to Connect the Data API and Your WordPress Website appeared first on Soliant Consulting.

Extending FileMaker PDF functionality with DynaPDF

Generating PDFs in FileMaker FileMaker Pro has had the ability to generate PDF documents since version 8, providing basic functionality that has remained largely unchanged since its introduction. Using FileMaker Pro’s plugin architecture, we can extend this functionality to provide the following enhancements:
- Search PDF document text
- Append multiple PDF documents in non-sequential order
- Add a watermark to a PDF document
- Password-protect a PDF document
- Delete pages from a PDF document
- Add clickable “Page Links” to move to different pages throughout a PDF document
- Compress a PDF file to prepare it for transmission over the internet
An intermediate to advanced FileMaker Pro developer can leverage these features to create even more dynamic PDF documents inside FileMaker Pro. Setup and Installation The MBS FileMaker Plugin by MonkeyBread Software is one of the most feature-packed plugins in the FileMaker community, with over 5,000 functions to date, and developer Christian Schmitz continues to regularly add new features. As a side note, one of my favorite features is its more colorful script syntax coloring, but there are many more that can be useful in a variety of situations. Using the MBS plugin, we can extend FileMaker’s functionality even further with the DynaPDF library. DynaPDF is a popular library for working with PDFs that the MBS plugin lets us access through its functions. For DynaPDF alone, you can use 484 custom functions inside FileMaker Pro. To set this up, first download the MBS plugin. Included in that download is the DynaPDF library in both macOS and Windows formats. Read the installation documentation to learn where to install the DynaPDF library on your computer. It is also worth noting that DynaPDF and the MBS FileMaker Plugin can also run as a server-side plugin, allowing you to leverage the power of server processing without requiring an individual license for DynaPDF on every end-user computer.
Plugin Functions & Scripting All DynaPDF functions use a dot syntax starting with “DynaPDF.” DynaPDF requires some sequential steps to manipulate a PDF document. Using the “Set Variable” script step, we can initialize the DynaPDF library and create a memory space inside FileMaker Pro. Here is an overview of this process using FileMaker script steps:

#Initialize new PDF instance
Set Variable [$pdf; Value:MBS ( "DynaPDF.New" )]
#Convert path to DynaPDF native path
Set Variable [$targetPath; Value:MBS( "Path.FileMakerPathToNativePath"; $fruitPath )]
#Open existing PDF into memory
Set Variable [$x; Value:MBS ( "DynaPDF.OpenPDFFromFile" ; $pdf ; $targetPath )]

After we call these initialization functions, we can perform any number of DynaPDF functions on the PDF we have brought into memory. Check out a complete listing of DynaPDF functions. To complete the PDF manipulation process and save our changes to the PDF document, we perform the following steps:

#Denote where to save changes to the PDF file; this could be the same path or a different one
Set Variable [$x; Value:MBS ( "DynaPDF.OpenOutputFile" ; $pdf ; $targetPath )]
#Save commits those changes to that path
Set Variable [$x; Value:MBS ( "DynaPDF.Save" ; $pdf )]
#Release clears the memory space, allowing a new instance to be created
Set Variable [$x; Value:MBS ( "DynaPDF.Release" ; $pdf )]

It is important to note that you can redirect the output of your changes to a different file path than the one you started with in the initialization functions earlier. Download Our Demo File You can use our complimentary demo file to explore a few capabilities of DynaPDF. This file contains a simple table containing pictures and images of different types of fruit. The goal is to create a listing of each fruit with clickable links to different pages throughout the PDF. First, DynaPDF allows us to create two PDFs and then combine them in reverse order.
We can parse the text of each page of the PDF to determine the page number of each fruit in the document. After combining the PDFs, we can add “Page Links” into specific locations on each page, allowing us to hyperlink within our combined PDF document. We use this in two ways --  first, by having a table of contents page that allows us to jump to a specific fruit on a page and second, by having a button on each page that allows us to return to the table of contents page. This minimizes the need to scroll through each page of the PDF, creating a more fluid user experience. Enhancing FileMaker Pro with PDF Document Manipulation With some script steps and external plugin functions, we see how powerful we can make FileMaker Pro by manipulating PDF documents. We can link around a PDF document dynamically, combine multiple PDF documents in different order and parse text from specific pages in a PDF, and much more. PDF functionality has remained the same for almost 15 years in FileMaker Pro. By using FileMaker Pro’s plugin architecture we can extend that functionality to build even more powerful custom applications. What kind of apps could you build with this powerful set of PDF tools? Next Steps for Your FileMaker Solution If you have any questions about this functionality or seek a development partner in using it to take your FileMaker solution to the next step, please contact our team. We’re happy to provide additional insights and determine if we’re a good fit for your next project. References MBS FileMaker Plugin DynaPDF from MBS FileMaker Hacks Blog Post The post Extending FileMaker PDF functionality with DynaPDF appeared first on Soliant Consulting.

JSONPath in FileMaker Demo

Leveraging JSON to pull additional information into FileMaker introduces new opportunities for integration with web services and RESTful APIs. The functionality allows you to strengthen your FileMaker solution by querying specific JSON data elements. Learn more about how this functionality can empower your organization by downloading our complimentary demo file on setting up your JSON path. Read the blog post… Complete the form to receive the file: Trouble with this form? Click here.
The post JSONPath in FileMaker Demo appeared first on Soliant Consulting.

JSONPath in FileMaker via the Web Viewer

Getting information from JSON opens up FileMaker for more integration with web services and RESTful APIs. With the Data API (beta in 16, v1 in 17) and the Admin API (FileMaker Server 17), JSON will likely continue to become more important and more integrated with FileMaker. Querying JSON Data Elements The JSONGetElement function lets us query specific JSON data elements, using either an object name, an array index, or a path. The key is that these must be known values: we need to know something about the JSON structure if we want specific elements. We cannot query or search JSON based on criteria. Alternatively, we could get everything from the JSON object into a field or variable, and then apply some rules after the fact. Maybe we want to compare information or look for elements that fit certain requirements, such as a price above or below certain thresholds. Maybe we want to look for arrays that are missing certain elements, such as ISBN numbers in an array with book information. In XML there’s a syntax called XPath that uses expressions to find information in nodes or node-sets. A similar concept lets us extend FileMaker’s JSON querying, taking advantage of a JavaScript library called JSONPath and FileMaker’s own Web Viewer to act as the engine for the JavaScript library. This lets us feed JSON and a JSONPath expression into a Web Viewer and return the result via the fmp URL protocol. FileMaker and Web Viewer Integrations Web Viewer and JavaScript integration is nothing new. A Google search for FileMaker and JavaScript integration will likely bring up many examples of such integration. Some examples include FileMaker reaching out to external APIs. Other examples use what’s called a “data URI” to render web pages directly in your web viewer. Using a “data URI” you can include CSS and JavaScript in your HTML code. Setting Up Your JSON Path The included demo file has a field for our jsonpath.js library.
This is an open source file and our only dependency. When implementing this in a real solution, you'd likely place it in a resource table and only call it as needed.

Next, we have a field for our JSON. This can be an object or an array. Your own process may store the JSON in a variable, but the demo file uses a field so that we can see the source values as we test our JSONPath expressions. Additionally, we have an "input" field, which holds the JSONPath query. Unlike JSONGetElement, JSONPath uses expressions in text format to search the JSON for matching results. Then we have a result field so we can see the result of our JSONPath query against the JSON using the jsonpath.js library. Both the input and result fields could also be turned into variables. Finally, we have an empty Web Viewer object named "WebViewer" so we can point to the object as needed.

Scripting Your JSON Path

There are two short scripts, "JSONPathInput" and "JSONPathWrite." The first script, "JSONPathInput," builds HTML and sets this HTML into the Web Viewer. As mentioned above, we can use fmpurl inside the HTML to call FileMaker scripts. The last action in our "JSONPathInput" script does just this, calling "JSONPathWrite" with a script parameter. That script parameter is the result of the JSONPath query, and all the second script does is set our Result field with this value. The process is simple enough, but the complexity and power lie in the HTML that we create.

HTML

The HTML inside the script takes up only a few lines of code. (Note: The \ characters are there to escape double quotes inside the text.) 
Substitute (
    "data:text/html,
    <!DOCTYPE html>
    <html lang=\"en\">
    <head>
    <script type=\"text/javascript\">
    **JS_FUNCTION**
    </script>
    <script type=\"text/javascript\">
    var json = **JSON**;
    var p = jsonPath(json, \"**INPUT**\" );
    var r = JSON.stringify(p);
    var url = \"fmp://$/" & Get ( FileName ) & "?script=JSONPathWrite&param=\" + r ;
    window.location = url ;
    </script>
    </head>
    </html>" ;
    [ "**JS_FUNCTION**" ; zResources::JSONPath ] ;
    [ "**JSON**" ; zResources::JSON ] ;
    [ "**INPUT**" ; zResources::Input ]
)

Let's break this apart into smaller chunks. First, the HTML is wrapped inside a FileMaker Substitute function. Skipping briefly to the last lines of code, we see three substitutions. Each bit of text bracketed by double asterisks gets switched out with values from our fields. These three values, such as **INPUT**, appear in the HTML code as placeholders. We'll tackle each piece separately.

Our text starts with "data:text/html," which means we're constructing a web page. Next, we have some HTML-specific tags at the beginning and end:

<!DOCTYPE html>
<html lang=\"en\">
<head>
. . .
</head>
</html>

Inside this HTML we have two JavaScript blocks. The first, represented by our **JS_FUNCTION** placeholder, is the jsonpath.js library itself; this brings the library into our web page. The second is where we write out how to apply the jsonpath.js library against our JSON and our JSONPath expression:

<script type=\"text/javascript\">
var json = **JSON**;
var p = jsonPath(json, \"**INPUT**\" );
var r = JSON.stringify(p);
var url = \"fmp://$/" & Get ( FileName ) & "?script=JSONPathWrite&param=\" + r ;
window.location = url ;
</script>

We declare our JSON value as a variable inside JavaScript. Then we declare another variable with the result of sending our **INPUT** query from the field into the jsonPath function. 
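The final two JavaScript lines above hand the result back to FileMaker by navigating to an fmp:// URL. Here is a plain-JavaScript sketch of that hand-off step; the file and script names are placeholders, and the encodeURIComponent call is an addition not present in the demo, guarding the parameter against characters that could break the query string:

```javascript
// Sketch: build an fmp:// URL that calls a FileMaker script with a parameter.
// "MyDatabase" and "JSONPathWrite" stand in for your own file and script names.
function buildFmpUrl(fileName, scriptName, param) {
  // encodeURIComponent protects the parameter from characters
  // (&, =, quotes, spaces) that would otherwise break the query string.
  return "fmp://$/" + fileName +
    "?script=" + scriptName +
    "&param=" + encodeURIComponent(param);
}

// Example: pass a stringified JSONPath result back to FileMaker.
const result = JSON.stringify([{ title: "Moby Dick", price: 8.99 }]);
const url = buildFmpUrl("MyDatabase", "JSONPathWrite", result);
// In the Web Viewer, this is followed by: window.location = url;
```

Setting window.location to that URL is what triggers the FileMaker script, closing the loop from Web Viewer back to the solution.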
Ordinarily, this might be enough, but jsonPath returns objects or arrays, and these might not display properly; instead of the data, you might see a value like "[object Object]". So we use the built-in JSON.stringify function to convert the result into a string. Finally, we set a variable to the fmpurl address of our "JSONPathWrite" script, with the stringified variable as the script parameter, and then feed this URL into the Web Viewer.

How does JSONPath work?

Let's assume we have some JSON:

{
  "store": {
    "book": [
      {
        "category": "reference",
        "author": "Nigel Rees",
        "title": "Sayings of the Century",
        "price": 8.95
      },
      {
        "category": "fiction",
        "author": "Evelyn Waugh",
        "title": "Sword of Honour",
        "price": 12.99
      },
      {
        "category": "fiction",
        "author": "Herman Melville",
        "title": "Moby Dick",
        "isbn": "0-553-21311-3",
        "price": 8.99
      },
      {
        "category": "fiction",
        "author": "J. R. R. Tolkien",
        "title": "The Lord of the Rings",
        "isbn": "0-395-19395-8",
        "price": 22.99
      }
    ],
    "bicycle": {
      "color": "red",
      "price": 19.95
    }
  }
}

With JSONGetElement, to pull an ISBN out of the "book" array we'd extract the value using something like:

JSONGetElement ( JSON ; "[2]isbn" )

The [2] is the third element of the book array (indexes start at 0). The first two elements contain no value for ISBN, so to find every ISBN we'd need to loop through all of them.

With JSONPath, all queries begin with "$", followed by a path with optional operators. The "$" represents the root object or element. Dot and bracket notation are used in ways similar to the JSONGetElement function. The power of JSONPath comes in the ability to traverse JSON and slice out elements, with the logic built into the JSONPath expression. Say you need to know which elements in the book array have an ISBN value. You can get those array values with this query. 
$[?(@.isbn)]

(For these filter examples, the book array itself is the JSON being queried, so "$" is the array.) If you want to find titles of books that don't have an ISBN, flip this around and append the title key:

$[?(!@.isbn)].title

To find books where the price is greater than some value, use comparison operators:

$[?(@.price > 15)]

The jsonPath function returns its matches inside a JavaScript array, and the "stringify" step preserves those brackets, so some additional manipulation is required. If your result contains a single value, even if it is an object, you can use JSONGetElement ( Result ; "[0]" ) to strip off the "[ ]" brackets. For arrays that contain multiple values, you can use JSONListKeys to determine how many values are in your array and process them with the native JSON functions.

This query returns the one book object where the book costs more than 15:

$[?(@.price > 15)]

[{"category":"fiction","author":"J. R. R. Tolkien","title":"The Lord of the Rings","isbn":"0-395-19395-8","price":22.99}]

In the above example, to get the title, use JSONGetElement ( Result ; "[0]title" ), and this will return "The Lord of the Rings."

For a more complete review of all options and operators available in this JSONPath JavaScript library, refer to this article on JSONPath.

Get the Demo file

Download the JSONPath demo

Why Use JSONPath?

There is some overlap between JSONPath and JSONGetElement. "$" is the same as JSONGetElement ( JSON ; "" ) and returns our entire JSON. FileMaker reorders JSON based on the keys, so the two might look different, but it's the same content. JSONPath allows applications that use JSON to both analyze and extract information, almost like searching the content, without first pulling the JSON apart as we'd need to do with JSONGetElement and additional scripting.

Caveats with JSONPath

There are a couple of considerations with fmpurl. Security settings will need to be updated to allow the "fmpurlscript" extended privilege for any accounts that use this process. 
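To see what these filter expressions are doing, here is plain JavaScript standing in for the JSONPath engine (not the library's own code): each of the three queries above maps directly onto an array filter over the book data.

```javascript
// The book array from the sample JSON above.
const books = [
  { category: "reference", author: "Nigel Rees", title: "Sayings of the Century", price: 8.95 },
  { category: "fiction", author: "Evelyn Waugh", title: "Sword of Honour", price: 12.99 },
  { category: "fiction", author: "Herman Melville", title: "Moby Dick", isbn: "0-553-21311-3", price: 8.99 },
  { category: "fiction", author: "J. R. R. Tolkien", title: "The Lord of the Rings", isbn: "0-395-19395-8", price: 22.99 }
];

// $[?(@.isbn)] : elements that have an isbn key
const withIsbn = books.filter(b => b.isbn);

// $[?(!@.isbn)].title : titles of elements without an isbn key
const noIsbnTitles = books.filter(b => !b.isbn).map(b => b.title);

// $[?(@.price > 15)] : elements whose price is greater than 15
const expensive = books.filter(b => b.price > 15);

// JSON.stringify produces the bracketed text FileMaker receives;
// coercing the objects to a string directly would yield "[object Object]".
const result = JSON.stringify(expensive);
```

This is also why the stringified result always arrives wrapped in "[ ]": the filter yields an array, even when only one element matches.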
If you have multiple versions of FileMaker open, running the script with the "wrong" version may fail. For example, if the fmpurl protocol is bound to FileMaker 16 and you try to use it in 17, you might get an error.

Leveraging New Features in FileMaker 17

If you have any questions about how to benefit from JSONPath and other new features in FileMaker 17, please contact our team. We're happy to help your team determine the best way to leverage new functionality for your FileMaker solution.

The post JSONPath in FileMaker via the Web Viewer appeared first on Soliant Consulting.
View original post

FileMaker Server 17 and the Crucial ‘stats.log’

stats.log on FileMaker Server

There is no log file more important on your FileMaker Server than the stats.log. It delivers invaluable information on how your deployment is behaving across the four traditional bottlenecks of any server:

- Is the processing power sufficient?
- Can the disk speed handle the volume?
- Is there enough memory available?
- Does the network card give me enough bandwidth?

The information in that log will help you troubleshoot performance bottlenecks in your solution. The stats.log will also help you plan and extrapolate whether the current server can handle any new load you have in mind. It will tell you whether it is safe to use more PSoS, add more server-side schedules, or add a few WebDirect users. Without this log, you are, in effect, flying blind.

stats.log in FileMaker Server 16

In FileMaker Server 16, the toggle to enable this log was in the Admin Console, as shown in Figure 1:

Figure 1 - Toggle for stats.log in FileMaker Server 16 Admin Console (click image to enlarge)

The statistics themselves were visible in the Console too, as shown in Figure 2:

Figure 2 - Statistics shown in FileMaker Server 16 Admin Console (click image to enlarge)

That is no longer the case in the new FileMaker Server 17 Admin Console.

stats.log in FileMaker Server 17

In the FileMaker Server 17 Admin Console, you can only toggle the top call stats log. To view any of the logs, you must download them (see Figure 3):

Figure 3 - Download log files in FileMaker Server 17 Admin Console (click image to enlarge)

Finding the stats.log in FileMaker 17

As my previous blog post on missing features in FileMaker 17 illustrates, you may think the stats.log is no longer available in FileMaker Server 17. Rest assured: it is still there. However, it is also still turned off by default. (I really wish it weren't.) Because we do not get visual reminders of this in the new Admin Console, you may easily forget about it. 
Turning on the stats.log in FileMaker 17

The very first thing I do when I install a new FileMaker Server, or log into one already running, is turn on the stats.log. I do not like flying blind. In FileMaker Server 17, you can only do so from the command line on the server itself, or through a secure tunnel to the command line on the server (see Figure 4):

Figure 4 - Turn on the stats.log in FileMaker Server 17 (click image to enlarge)

While you are there, you can also enable the Client Stats log:

fmsadmin enable clientstats

That client stats log, as well as the top call stats log, will turn itself off on every FileMaker Server restart, but the regular stats.log will remain on after issuing this command.

Next, I verify what the logging interval is and how big the log size setting is:

fmsadmin get serverconfig

Figure 5 - Verifying the logging interval and log size setting (click image to enlarge)

The default settings of logging every 30 seconds and keeping the log size at 40MB are usually sufficient. If you want to change them, the command would be:

fmsadmin set serverconfig statsinterval=15

This, for example, would change the interval to 15 seconds.

Accessing Your Data in the stats.log

Now that we can sleep easy knowing our server deployment will log valuable performance data, how do we get to the data when we need it? You cannot download the stats.log and the ClientStats.log from the Admin Console; you will need to grab them from the FileMaker Server logs folder (see Figure 6):

Figure 6 - Get the stats.log and ClientStats.log from the FileMaker Server logs folder (click image to enlarge)

Questions About the stats.log

As always: post any questions as a comment on this blog, and we'll be happy to help out. For a full run-down of the new Admin Console, the configuration options that are available from the command line, the new Admin API, and a dedicated Tech Brief on monitoring your FileMaker Server, download our comprehensive white papers. 
Leveraging New Features in FileMaker 17

If you have any questions about how to benefit from the features in FileMaker 17, please contact our team. We're happy to help your team determine the best way to leverage new functionality for your FileMaker solution.

The post FileMaker Server 17 and the Crucial ‘stats.log’ appeared first on Soliant Consulting.
View original post

JavaScript App Extension to the FileMaker Server 17 Admin Console

New FileMaker Server Admin Tools

FileMaker Server 17 includes exciting new offerings, including a Data API out of beta and a brand-new Admin API. The FileMaker Server 17 Admin Console has been redesigned with a fresh, new look. However, some features that were available in the Admin Console in previous versions of FileMaker Server are no longer available in it. For instance, "send message" and "verify database" schedules can no longer be created from the console. Setting the maximum number of hosted files, enabling/disabling the stats log, and enabling/disabling the XML and PHP APIs are no longer available from the Admin Console either.

Download the tech briefs co-authored by Wim Decorte and Steven H. Blackwell for a comprehensive review of how the FileMaker Admin Console has changed in FileMaker 17. Read the blog posts:

- FileMaker Server 17 Admin Console, Admin API, and Admin CLI
- FileMaker Server Monitoring in FileMaker 17
- FileMaker Server 17 and SSL Changes You Need to Know

As explained in much greater detail in those white papers, most of the functionality in previous versions of the Admin Console has not gone away, but it is no longer available through the Admin Console; instead, we now need to rely on the Admin CLI and/or the Admin API to use or set some of those features and options.

Moving Forward in FileMaker 17

At Soliant Consulting, we thought it would be nice to have a convenient, web-based means of administering more aspects of FileMaker Server than the stock Admin Console offers. To do so, we are leveraging the new Admin API, both to simplify tasks that are no longer available via the native Admin Console and to do things that were not possible before 17, like visualizing and manipulating FileMaker Server schedules in a calendar view.

Introducing the FileMaker Server Admin Console Extension

We set out to build a web-based extension of the native Admin Console, an app called FMS ACE (FileMaker Server Admin Console Extension). 
Our team is sharing it with the FileMaker community for free, in open source form for anyone to modify and extend. This version of FMS ACE provides the following enhancements:

- Display all schedules on the server in a calendar view
- Filter the schedule display by schedule type
- Create "verify database" schedules
- Create "send message" schedules

We do not intend for FMS ACE to replace the native FileMaker Admin Console. Rather, we see it as an easy-to-use complement to the stock Admin Console; you can run both of them side by side in a web browser. FMS ACE is a JavaScript app, just like the native FileMaker Server 17 Admin Console. It leverages the new Admin API in FileMaker Server 17 to obtain all schedules on a FileMaker Server, and it displays them in a calendar format so you can see at a glance every schedule defined on your server. You can choose a "month", "week", or "day" view, and you can create and delete schedules right from the calendar in FMS ACE. Your edits are automatically pushed to your FileMaker Server. Sound appealing? Keep on reading!

A Few Words on the FMS Admin API

The new Admin API in 17 allows us to interact with FileMaker Server via RESTful calls. For a complete list of the calls available, refer to the Admin API documentation installed with FileMaker Server 17 at https://<your_server_here>/fmi/admin/apidoc/. You can also read Anders Monsen's introduction to FileMaker 17's new Admin API.

FMS ACE and Schedules

Half of the FileMaker Server 17 Admin API calls involve interacting with schedules (backup, script, send message, and verify database schedules). Let's look at how FMS ACE leverages this new API, and in particular the schedule calls, to work with FileMaker Server schedules. When you launch FMS ACE in a web browser, you are prompted for the credentials to connect to your FileMaker Server (the Admin Console credentials). This is very similar to logging into the native Admin Console. 
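That login-then-fetch flow boils down to two HTTP requests, sketched below as plain request descriptors. The endpoint paths shown are assumptions inferred from the Admin API's URL pattern; verify them against the documentation at https://<your_server_here>/fmi/admin/apidoc/ before relying on them.

```javascript
// Sketch of the two Admin API requests behind FMS ACE's startup flow.
// Endpoint paths are assumptions; check your server's apidoc.

// Step 1: log in with the Admin Console credentials (HTTP Basic auth);
// the server responds with a session token.
function loginRequest(server, user, password) {
  return {
    url: "https://" + server + "/fmi/admin/api/v1/user/login",
    method: "POST",
    headers: {
      Authorization: "Basic " + Buffer.from(user + ":" + password).toString("base64")
    }
  };
}

// Step 2: fetch every schedule defined on the server, using the token.
function schedulesRequest(server, token) {
  return {
    url: "https://" + server + "/fmi/admin/api/v1/schedules",
    method: "GET",
    headers: { Authorization: "Bearer " + token }
  };
}
```

A thin fetch layer would execute each descriptor in turn, feeding the token from the login response into schedulesRequest; the resulting schedule list is what gets handed to the calendar view.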
Once you log in, FMS ACE automatically pulls the data for all schedules on the FileMaker Server that you connect to, and it displays them in this fashion:

Calendar displays "week" view by default (click image to enlarge)

Behind the scenes, FMS ACE uses the "schedules" Admin API call to obtain all schedules on the server. It uses the free, open source FullCalendar JavaScript library to visualize the data returned by the Admin API in a calendar view. Note that the different schedule types are color-coded, and you can choose to filter by schedule type, displaying only some schedule types at a time.

"Month" calendar view filtered to show only Backup and Verify schedules (click image to enlarge)

You can also create "verify database" and "send message" schedules from FMS ACE, two schedule types that can no longer be created from the native Admin Console.

Popup to add a Verify Database schedule (click image to enlarge)

To view the detail for a schedule, click on it. To delete it, click "Delete" in the details popup.

Popup showing schedule detail and ability to delete schedule (click image to enlarge)

How can you get your hands on this app, you are wondering?

Utilizing FMS ACE as an End User

1. Download the latest version of the built JavaScript app from GitHub
2. Copy the "fms-ace" folder to the following location on your FileMaker Server machine:

On Windows:
C:\Program Files\FileMaker\FileMaker Server\HTTPServer\conf\

On Mac:
/Library/FileMaker Server/HTTPServer/htdocs/httpsRoot/

That's it! Now you should be able to access FMS ACE on your FileMaker Server at the /fms-ace route, e.g., https://<your_server_here>/fms-ace. Please note that the current version of FMS ACE is only supported on FileMaker Servers with a security certificate signed by a Certificate Authority (not with the self-signed certificate that ships with FMS 17). We have a ticket on our roadmap to try to overcome this limitation.

Leveraging the Admin CLI

Currently, we rely entirely on the Admin API to interact with FileMaker Server. In the future, FMS ACE could also leverage the Admin CLI. As mentioned earlier, some functionality that previously existed in the native Admin Console is now available only in the Admin API and/or Admin CLI. It would be possible to issue Admin CLI commands from FMS ACE, making the app a central place from which to interact with FileMaker Server. The future of FMS ACE depends on interest within the FileMaker community, so please let us know what you think. You can even extend FMS ACE yourself, as we are releasing it as an open source project!

Contributing to FMS ACE as a Developer

The project is available on GitHub. Please refer to the Contributing Guidelines if you would like to extend FMS ACE.

Final thoughts

The administrative tools available in FileMaker Server 17 represent a significant departure from previous versions of FileMaker Server. We believe that FMS ACE can grow to be a handy tool for FileMaker Server administrators. Please connect with us on GitHub and let us know what you think about FMS ACE. Could this be a useful tool for you? What new features in FMS ACE would you be most interested in?

If you're interested in other ways to integrate web services with your FileMaker solution, contact our team. We can provide insights on the best way to move forward. 
The post JavaScript App Extension to the FileMaker Server 17 Admin Console appeared first on Soliant Consulting.
View original post